Sometimes it is convenient to "keep going" when importing lots of content, ignoring tracebacks and other failures that may occur with certain content.
Is there any generic mechanism in Transmogrifier to make this easier? The only approaches I can see are:
Use only custom blueprints that try/except where appropriate.
Use a wrapper to execute the pipeline that changes the source blueprint input to be one-after-the-failure each time.
Neither of these appear particularly convenient or desirable, hence my question.
You only need to write one blueprint that handles and ignores whatever exceptions you want. Be sure to put it right after the "source" blueprint and yield inside a try/except block.
...
def __call__(self):
    for item in self.previous:
        try:
            yield item
        except Exception as e:
            # here do with the exception whatever you want
            pass
I am aware that this is not a real workaround for that (common) issue, but here's my only solution: I use a lot of pipeline steps, each making a single, well-known change to my items. If there's a step that I fear could cause trouble, I add a condition step (collective.transmogrifier.sections.condition) and simply drop potentially bad items. I think a real solution would be to change the way the pipeline runner calls each step: it should be responsible for managing exceptions in a customisable way. If someone else has a better solution, I'm interested too.
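For illustration, a condition section in the pipeline configuration might look roughly like this (the section names and the condition expression are made-up examples; the condition section keeps only the items for which the TALES expression evaluates to true):

[transmogrifier]
pipeline =
    source
    drop-suspect-items
    constructor

[drop-suspect-items]
blueprint = collective.transmogrifier.sections.condition
condition = python: bool(item.get('title'))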
Basically, I want to be able to generate class definitions, compile the system, and save it for reuse. Would that involve a code walker, or is there a simpler option?
(save-lisp-and-die "isn't going to work for me")
Expanding to explain. I'm generating systems based on OpenAPI definitions, so a system roughly corresponds to an API client.
There will be dozens, if not hundreds of these.
The idea is to NOT keep them all in the image, but load at run time as required.
I see two possible routes here, and to some extent, I suspect they mainly differ in "the last mile" (as it were).
The route you seem to have settled on, run-time definition of classes and functions.
A route whereby you generate your function/class forms, but don't go the full way to get them "Live" in the image and instead emit the form(s) to a file.
I suspect that most of the generating code could be shared between the two: for the first route, a wrapping macro that effectively returns a PROGN; for the second, a call to a function that pretty-prints what the macro would have returned to a stream.
That said, building a tailored environment and saving it to a "core" file is a pretty good way of getting excellent startup times.
I'm currently trying to make some speed improvements to one of my sites, and I'm looking at Modernizr usage.
Previously all of my javascript (including Modernizr) was lumped into one big js file. I've now removed Modernizr and it sits inline in the head section of the page. For clarity, it is a custom build.
However, not all feature detects are equal - some features benefit from being detected quickly while others can wait.
For instance, detecting webp support is pretty important, because I assume downloading a jpeg then another webp version sort of defeats the object of the feature.
Then, there are things like pointer/touch support, which don't affect layout as such and are more to do with interaction - so they can wait.
With that in mind, the obvious thing is to put two instances of Modernizr in the page - one for the important stuff at the top, and one for the rest at the bottom.
However, I've been unable to find anything on this topic. I guess that leads me to ask two questions: is it possible? And if it is - is it a sensible idea?
It definitely is possible to have two instances of Modernizr on one page, but in order to do that you have to manually rename the global object to something else, since Modernizr is exposed on window directly:
e.Modernizr = Modernizr // e is internal ref. to window object
}(window, document);
This, however, may be considered a dirty patch, since you have to alter the production code (and maintain that alteration manually through update cycles) and download and execute exactly the same basic functionality twice, which is suboptimal.
Another approach would be to build in everything that is needed immediately for the first batch of (essential) tests and then to utilize Modernizr.addTest (it has to be included in the build) later on, for non-essential functionality.
Source and doc-like comments.
Of course, you'd have to write your own tests. You may rely on the official Modernizr tests, but addTest called outside of Modernizr's factory method lacks some useful helpers (for example, things like Modernizr's internal createElement()).
You have to make choices since there is no way to subsequently add other tests out of the box.
I am currently working with Python and Qt, which is kind of new for me coming from the C++ version, and I realised that the official documentation says a UI file can either be loaded directly from the .ui file or converted into a Python class in a .py file.
I get the benefit of using the .ui file: it is loaded dynamically, so there's no need to regenerate a Python file with every change. But what are the benefits of converting it? Do you get any improvement in run time? Is it something else?
Thanks
Well, this question is dangerously near to the "Opinion-based" flag, but it's also a common one and I believe it deserves at least a partial answer.
Conceptually, the pyuic approach and the uic.loadUi() method do the same thing and behave in very similar ways, but with some slight differences.
To better explain all this, I'll use the documentation about using Designer as a reference.
pyuic approach, or the "python object" method
This is probably the most popular method, especially amongst beginners. What it does is to create a python object that is used to create the ui and, if used following the "single inheritance" approach, it also behaves as an "interface" to the ui itself, since the ui object its instance creates has all widgets available as its attributes: if you create a push button, it will be available as ui.pushButton, the first label will be ui.label and so on.
In the first example of the documentation linked above, that ui object is stand-alone; that's a very basic example (I believe it was given just to demonstrate its usage, since it wouldn't provide a lot of interaction besides the connections created within Designer) and is not very useful, but it's very similar to the single inheritance method: the button would be self.ui.pushButton, etc.
IF the "multiple inheritance" method is used, the ui object will coincide with the widget subclass. In that case, the button will be self.pushButton, the label self.label, etc.
This is very important from the python point of view, because it means that those attribute names will overwrite any other instance attribute that will use the same name: if you have a function named "saveFile" and you name the button "saveFile", you won't have any [direct] access to that instance method any more as soon as setupUi is returned. In this case, using the single inheritance method might be helpful - but, in reality, you could just be more careful about function and object names.
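As a rough sketch of both patterns (the mainwindow module name, the Ui_MainWindow class and the pushButton object name are just assumptions for the example):

from PyQt5 import QtWidgets
from mainwindow import Ui_MainWindow  # module generated with pyuic5

class SingleWindow(QtWidgets.QMainWindow):
    # single inheritance: widgets live on a separate self.ui object
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.ui.pushButton.clicked.connect(self.close)

class MultiWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    # multiple inheritance: widgets become attributes of the window itself
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.pushButton.clicked.connect(self.close)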
Finally, if you don't know what the pyuic-generated file does and what it's for, you might be inclined to write your program directly in it. That is wrong for a lot of reasons but, most importantly, because you will almost certainly realize at some point that you have to edit your ui, and merging the new changes with your modified code is clearly a PITA you don't want to face.
I recently answered a related question, trying to explain what happens when setupUi() is called in much more depth.
Using uic.loadUi
I'd say that this is a more "modular" approach, mostly because it's much more direct: as already pointed out in the question, you don't have to constantly regenerate the ui files each time they're modified.
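A minimal sketch, assuming a mainwindow.ui file sitting next to the script and containing a QPushButton named pushButton (both names are assumptions):

from PyQt5 import QtWidgets, uic

class Window(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # builds the widgets from the XML at run time, directly onto self
        uic.loadUi('mainwindow.ui', self)
        self.pushButton.clicked.connect(self.close)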
But, there's a catch.
First of all: obviously, loading, parsing and building a UI from an XML file is not as fast as creating the ui directly from code (which is exactly what the pyuic file does within setupUi()).
Then, there is at least one relatively small bug about layout contents margins: when using loadUi, the default system/form margins might be completely ignored and set to 0 if not explicitly set. There is a workaround about that, explained in Size of verticalLayout is different in Qt Designer and PyQt program (thanks to eyllanesc).
A comparison
pyuic approach
Pros:
it's faster; in a very simple test with a hundred buttons and a tablewidget with more than 1200 items I measured the following bests:
pyuic loading: 33.2ms
loadUi loading: 51.8ms
this ratio is obviously not linear for a multitude of reasons, but you can get the idea
if used with the single inheritance method, it can prevent accidental instance attribute overwritings, and it also means a more "contained" object structure
using python imports ensures a more coherent project structure, especially in the deployment process (having non-python files is a common source of problems)
the contents of those files are actually instructive, especially for beginners
Cons:
you always have to remember to regenerate the python files every time you update a ui; we all know how easy it is to forget an apparently meaningless step like this, especially after hours of coding: I've seen plenty of situations in which people were banging heads on desks (hopefully both theirs) for hours over untraceable issues, before realizing that they had just forgotten to run pyuic or hadn't run it on the right files; my own forehead still hurts ;-)
file tracking: you have to keep track of two files for each ui, and you might forget one of them along the way when migrating/forking/etc; if you lose a ui file, it possibly means you have to recreate it completely from scratch
n00b alert: beginners are commonly led to think that the generated python file is the one they should use to create their programs, which is obviously wrong; unfortunately, the # WARNING! message is not clear enough (I've been almost begging the head PyQt developer about this); while this is not an inherent problem of the approach itself, in practice it ends up being one
some of the contents of a pyuic-generated file are usually unnecessary (most importantly, the object name, which is only used in specific cases), and that's pretty obvious, since it's automatically generated ("you might need that, so better safe than sorry"); also, related to the issue above, people might be led to think that everything pyuic creates is actually needed for a GUI, resulting in unnecessary code that decreases readability
loadUi method
Pros:
it's direct and immediate: you edit your ui on Designer, you save it (or, at least, you remember to do it...), and when you run your code it's already there; no fuss, no muss, and desks/foreheads are safe(r)
file tracking and deployment: it's just one file per ui; you can put all those ui files in a separate folder, you don't have to do anything else, and you don't risk forgetting something along the way
direct access to widgets (but this can be achieved using the multiple inheritance approach also)
Cons:
the layout issue mentioned above
possible instance attribute overwriting and no "ui" object "containment"
slightly slower loading
path and deployment: loading is done using OS-relative paths and system separators, so if you put the .ui file in a directory different from the .py file that loads it, you'll have to account for that (see the sketch below); also, some package managers tend to compress everything, resulting in access errors unless paths are correctly managed
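A common way to mitigate the path issue is to build the path from the location of the Python file that loads the UI (the ui subfolder and file name are assumptions):

import os
from PyQt5 import QtWidgets, uic

UI_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'ui')

class Window(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # resolve the .ui file relative to this module, not the working directory
        uic.loadUi(os.path.join(UI_DIR, 'mainwindow.ui'), self)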
In my opinion, all things considered, the loadUi method is usually the better choice. It doesn't distract me, it allows better conceptual compartmentation (which is usually good and also follows a pattern much closer to MVC, conceptually speaking), and I strongly believe it to be far less prone to programmer errors, for a multitude of reasons.
But that's clearly a matter of choice.
We should also always remember that, like every other choice we make, using ui files is an option.
There are people who avoid them completely (just as there are people who use them for literally everything), but, like everything else, it all depends on the context.
A big benefit of using pyuic is that code autocompletion will work.
This can make programming much easier and faster.
Then there's the fact that everything loads faster.
pyuic6-Tool can be used to automate the call of pyuic6 when the application is run and only convert .ui files when they change.
It takes a little longer to set up than just using uic.loadUi, but the autocompletion is well worth it if you use something like PyCharm.
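If you'd rather not add a dependency, a small hand-rolled version of the same idea is easy to write; this sketch (file names are assumptions) reruns pyuic6 only when the .ui file is newer than the generated module:

import os
import subprocess

def build_ui(ui_path, py_path):
    # regenerate the Python module only if the .ui file has changed
    if (not os.path.exists(py_path)
            or os.path.getmtime(ui_path) > os.path.getmtime(py_path)):
        subprocess.run(['pyuic6', '-o', py_path, ui_path], check=True)

build_ui('mainwindow.ui', 'ui_mainwindow.py')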
I'm curious to know how feasible it is to get away from the dependency on the application's internal structure when you create an automated test case. Otherwise you may need to rewrite the test case whenever a developer modifies part of the code for a bug fix, etc.
We could write several automated test cases based on the application's internal object structure, but let's assume that the object hierarchy changes after 6 months or so; how do we approach this kind of issue?
I can't speak for other testing tools but at least in QTP's case the testing tool introduces a level of abstraction over the application so that non-functional changes in the application often (but not always) have no effect on the way the testing tool identifies the object.
For example in QTP all web elements are considered to be direct children of the document so that changes in the DOM (such as additional tables) don't change the object's description.
In TestComplete, there are a couple of ways to make sure that a changed app structure does not break your tests.
You can set up the Aliases tree of the Name Mapping feature. In this case, if the app structure is changed, you only need to modify the Aliases tree appropriately, and your tests will keep working without having to be modified themselves.
You can use the Extended Find feature of Name Mapping to ignore parts of the actual object tree and search for the needed objects at deeper levels.
This is what I was forced to do after losing all my work twice due to changes on the DOM structure:
Every single time I need to work with an object, I use the Find function with the ID of the object, searching for the object on the Page object. This way, whenever the DOM gets updated, my tests still run smoothly.
The only thing that will break my tests is if an object's ID gets changed, but that's not very likely to happen.
Here you can find some examples of the helper functions I use.
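As an illustration only (not the author's actual helpers), a find-by-id wrapper in TestComplete's Python scripting could look something like this; the "idStr" property name and the search depth are assumptions, so adjust them to whatever your web objects actually expose:

def find_by_id(object_id):
    # search the whole page subtree for an element with the given HTML id,
    # ignoring the intermediate DOM hierarchy
    page = Sys.Browser("*").Page("*")
    obj = page.Find("idStr", object_id, 1000)
    if not obj.Exists:
        Log.Error("Object with id '%s' was not found" % object_id)
    return obj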
I have two different contexts on a Plone instance.
The first context has some ATFolders. The second also has ATFolders, which have to be kept in sync with the first context using some subscribers.
In the second context, the ATFolders have to know that they are linked to some of the folders on the first context.
I thought about using setattr on them (setattr(obj_context1, attr, obj_context2.UID())) instead of creating a new content type just to have a ReferenceField attribute (or using archetypes.schemaextender), since that would be overkill for a single parameter in a specific context: the folders that will have this attribute will not be deleted from the ZODB, for example. They will have a placeful workflow with just one state. This attribute is completely hidden from the user, and the folders in the second context are created programmatically, with no user intervention.
This attribute should only exist in the second context, so creating an adapter or a new content-type, just to be used in this context seems to be too much.
I'm inclined to use setattr for the sake of pragmatism in this specific scenario, but I don't know if the setattr approach is going to haunt me in the future (performance, ZODB conflicts, etc.). I mean: when doing a catalog update or a workflow update, is this new attribute going to cause a problem?
Any thoughts? Anyone experienced with setattr in this situation? This attribute will not and should not be visible; it's only for some internal control.
I don't think it's bad practice at all, I do similar things for similar situations.
You could use an attribute annotation, which would help prevent conflicts with other attributes, but that's a style and performance choice more than anything. Attribute annotations are stored in their own ZODB persistent record, so the impact depends on how often this attribute will change compared to the other attributes on the folder.
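A minimal sketch of the annotation variant (the key name is a made-up example; use your own dotted name):

from zope.annotation.interfaces import IAnnotations

LINK_KEY = 'my.package.linked_folder_uid'  # hypothetical annotation key

def link_folders(folder_ctx2, folder_ctx1):
    # store the UID of the first-context folder on the second-context folder
    IAnnotations(folder_ctx2)[LINK_KEY] = folder_ctx1.UID()

def linked_uid(folder_ctx2):
    return IAnnotations(folder_ctx2).get(LINK_KEY)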
Last but not least, I would probably encapsulate the behaviour in an adapter, to make the implementation flexible for future uses. You can either register the adapter to the ATFolder interface, or to IAttributeAnnotatable, depending on how much your implementation relies on what the adapted object needs to provide.
Other notes: We've also used plone.app.relations connections between objects in the past (maintained outside the object schema, like your attribute), but found five.intid (the underlying machinery plone.app.relations relies on) to be too fragile, and would use simple UID attributes with catalog searches in the future.
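For the plain UID-attribute approach, resolving the link back to an object could look roughly like this (the attribute name is hypothetical):

from Products.CMFCore.utils import getToolByName

def linked_folder(folder_ctx2):
    # resolve the stored UID back to the first-context folder via the catalog
    uid = getattr(folder_ctx2, 'linked_folder_uid', None)  # hypothetical attribute
    if not uid:
        return None
    catalog = getToolByName(folder_ctx2, 'portal_catalog')
    brains = catalog(UID=uid)
    return brains[0].getObject() if brains else None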
In reference to Ross's answer, if the information in question doesn't need to be end-user editable, a schemaextender attribute is overkill.
Maybe use archetypes.schemaextender? See also this doc. This way you can use an actual ReferenceField, get all sorts of stuff for free, and spend a lot less time re-implementing said free stuff.