Two instances of Modernizr on one page - css

I'm currently trying to make some speed improvements to one of my sites, and I'm looking at Modernizr usage.
Previously all of my JavaScript (including Modernizr) was lumped into one big JS file. I've now pulled Modernizr out and it sits inline in the head section of the page. For clarity, it is a custom build.
However, not all feature detects are equal - some features benefit from being detected quickly while others can wait.
For instance, detecting WebP support is pretty important, because I assume downloading a JPEG and then a WebP version of the same image rather defeats the purpose of the feature.
Then, there are things like pointer/touch support, which don't affect layout as such and are more to do with interaction - so they can wait.
With that in mind, the obvious thing is to put two instances of Modernizr in the page: one for the important stuff at the top, and one for the rest at the bottom.
However, I've been unable to find anything on this topic. I guess that leads me to ask two questions: is it possible? And if it is - is it a sensible idea?

It definitely is possible to have two instances of Modernizr on one page, but in order to do that you have to manually rename the global object in one of them, since Modernizr is exposed directly on window:
e.Modernizr = Modernizr // e is internal ref. to window object
}(window, document);
This, however, may be considered a dirty patch, since you have to alter the production code (and maintain that alteration manually through every update cycle) and download and execute essentially the same basic functionality twice, which is suboptimal.
Another approach would be to build everything that is needed immediately into the first batch of (essential) tests and then use Modernizr.addTest (it has to be included in the build) later on for the non-essential functionality.
Source and doc-like comments.
Of course, you'd have to write your own tests. You may rely on the official Modernizr tests, but addTest called outside of Modernizr's factory method lacks some useful helpers (for example, Modernizr's internal createElement()).
You have to make these choices up front, since there is no out-of-the-box way to add other tests later.

Related

Benefits of using pyuic vs uic.loadUi

I am currently working with Python and Qt, which is kind of new for me coming from the C++ version, and I noticed that the official documentation says a UI file can be used either by loading the .ui directly or by converting it into a Python class in a .py file.
I get the benefit of using .ui directly: it is loaded dynamically, so there is no need to convert it into a Python file after every change. But what are the benefits of converting it? Do you get any improvement at run time? Is it something else?
Thanks
Well, this question is dangerously near to the "Opinion-based" flag, but it's also a common one and I believe it deserves at least a partial answer.
Conceptually, the pyuic approach and the uic.loadUi() method are the same and behave in very similar ways, but with some slight differences.
To better explain all this, I'll use the documentation about using Designer as a reference.
pyuic approach, or the "python object" method
This is probably the most popular method, especially amongst beginners. It creates a Python object that is used to build the UI and, if used following the "single inheritance" approach, it also behaves as an "interface" to the UI itself, since the ui object exposes all the widgets it creates as its own attributes: if you create a push button, it will be available as ui.pushButton, the first label as ui.label, and so on.
In the first example of the documentation linked above, that ui object is stand-alone; it's a very basic example (I believe it was given just to demonstrate the usage, since it wouldn't provide much interaction beyond the connections created within Designer) and is not very useful, but it's very similar to the single-inheritance method: the button would be self.ui.pushButton, etc.
If the "multiple inheritance" method is used, the ui object coincides with the widget subclass itself. In that case, the button will be self.pushButton, the label self.label, etc.
This is very important from the Python point of view, because it means that those attribute names will overwrite any other instance attribute that uses the same name: if you have a method named "saveFile" and you name the button "saveFile", you won't have any [direct] access to that method any more as soon as setupUi() returns. In this case, using the single-inheritance method might be helpful; but, in reality, you could just be more careful about function and object names.
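For reference, here is a minimal sketch of the two patterns, assuming a module generated with something like "pyuic5 mainwindow.ui -o ui_mainwindow.py" that contains a Ui_MainWindow class (all names here are illustrative):

from PyQt5 import QtWidgets
from ui_mainwindow import Ui_MainWindow  # hypothetical pyuic-generated module

# Single inheritance: the generated object lives in a separate "ui" attribute,
# so widgets are reached as self.ui.pushButton, self.ui.label, etc.
class SingleWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.ui.pushButton.clicked.connect(self.ui.label.clear)

# Multiple inheritance: setupUi() installs the widgets directly on self,
# so they are reached as self.pushButton, self.label, etc.
class MultiWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.pushButton.clicked.connect(self.label.clear)

if __name__ == '__main__':
    app = QtWidgets.QApplication([])
    win = MultiWindow()
    win.show()
    app.exec_()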
Finally, if you don't know what the pyuic-generated file does and what it's for, you might be inclined to edit it directly to create your program. That is wrong for a lot of reasons but, most importantly, because you will almost certainly realize at some point that you have to edit your UI, and merging the regenerated code with your modifications is clearly a PITA you don't want to face.
I recently answered a related question, trying to explain what happens when setupUi() is called in much more depth.
Using uic.loadUi
I'd say that this is a more "modular" approach, mostly because it's much more direct: as already pointed out in the question, you don't have to regenerate the Python files every time the UI is modified.
But, there's a catch.
First of all: obviously, loading, parsing and building a UI from an XML file is not as fast as creating the UI directly from code (which is exactly what the pyuic-generated file does within setupUi()).
Then, there is at least one relatively small bug regarding layout content margins: when using loadUi, the default system/form margins might be completely ignored and set to 0 if they are not explicitly set. There is a workaround for that, explained in Size of verticalLayout is different in Qt Designer and PyQt program (thanks to eyllanesc).
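For reference, a minimal sketch of this approach (the .ui file name and widget names are illustrative):

from PyQt5 import QtWidgets, uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # loadUi parses the XML and installs the widgets directly on self,
        # so they are available as self.pushButton, self.label, etc.
        uic.loadUi('mainwindow.ui', self)
        self.pushButton.clicked.connect(self.label.clear)

if __name__ == '__main__':
    app = QtWidgets.QApplication([])
    win = MainWindow()
    win.show()
    app.exec_()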
A comparison
pyuic approach
Pros:
it's faster; in a very simple test with a hundred buttons and a table widget with more than 1200 items I measured the following best times:
pyuic loading: 33.2 ms
loadUi loading: 51.8 ms
this ratio is obviously not linear for a multitude of reasons, but it gives you an idea
if used with the single-inheritance method, it can prevent accidental overwriting of instance attributes, and it also means a more "contained" object structure
using Python imports ensures a more coherent project structure, especially in the deployment process (having non-Python files is a common source of problems)
the contents of those files are actually instructive, especially for beginners
Cons:
you always have to remember to regenerate the Python files every time you update a UI; we all know how easy it is to forget an apparently meaningless step like this, especially after hours of coding: I've seen plenty of situations in which people were banging their heads on desks (hopefully their own) for hours because of untraceable issues, before realizing that they just forgot to run pyuic or didn't run it on the right files; my own forehead still hurts ;-)
file tracking: you have to keep track of two files for each UI, and you might forget one of them along the way when migrating/forking/etc.; if the one you forgot is the .ui file, you may have to recreate it completely from scratch
n00b alert: beginners are commonly led to think that the generated Python file is the one to be used to create their programs, which is obviously wrong; unfortunately, the # WARNING! message is not clear enough (I've been almost begging the head PyQt developer about this); while this is not strictly a problem of the approach itself, in practice it ends up being one
some of the contents of a pyuic-generated file are usually unnecessary (most importantly the object names, which are only needed in specific cases), which is understandable, since it's automatically generated ("you might need that, so better safe than sorry"); also, related to the issue above, people might be led to think that everything pyuic creates is actually needed for a GUI, resulting in unnecessary code that decreases readability
loadUi method
Pros:
it's direct and immediate: you edit your UI in Designer, you save it (or, at least, you remember to do so...), and when you run your code it's already there; no fuss, no muss, and desks/foreheads are safe(r)
file tracking and deployment: it's just one file per UI, you can put all the .ui files in a separate folder, you don't have to do anything else, and you don't risk forgetting something along the way
direct access to widgets (but this can be achieved using the multiple inheritance approach also)
Cons:
the layout issue mentioned above
possible instance attribute overwriting and no "ui" object "containment"
slightly slower loading
path and deployment: loading is done using OS-relative paths and system separators, so if you put the .ui in a directory different from the .py file that loads it you'll have to account for that (see the sketch after this list); also, some packaging tools compress everything, resulting in access errors unless paths are managed correctly
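A common way to deal with the path issue is to resolve the .ui location relative to the module that loads it rather than relying on the current working directory; a small sketch (folder and file names are assumptions):

import os
from PyQt5 import QtWidgets, uic

# Resolve the .ui file relative to this module, not to the current
# working directory, so the path keeps working after deployment.
UI_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'ui')

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi(os.path.join(UI_DIR, 'mainwindow.ui'), self)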
In my opinion, all things considered, the loadUi method is usually the better choice. It doesn't distract me, it allows better conceptual compartmentalization (which is usually good and also follows a pattern much closer to MVC, conceptually speaking), and I strongly believe it to be far less prone to programmer errors, for a multitude of reasons.
But that's clearly a matter of choice.
We should also always remember that, like every other choice we make, using UI files is an option.
There are people who avoid them completely (just as there are people who use them for literally everything), but, like everything, it always depends on the context.
A big benefit of using pyuic is that code autocompletion will work.
This can make programming much easier and faster.
Then there's the fact that everything loads faster.
pyuic6-Tool can be used to automate calling pyuic6 when the application is run, converting .ui files only when they change.
It takes a little longer to set up than just using uic.loadUi, but the autocompletion is well worth it if you use an IDE like PyCharm.
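If you'd rather not add another dependency, a minimal hand-rolled sketch of the same idea could look like this (folder names are assumptions; it simply shells out to the pyuic6 command before the generated modules are imported):

import glob
import os
import subprocess

def regenerate_ui(ui_dir='ui', out_dir='generated'):
    # Run pyuic6 for every .ui file that is newer than its generated .py.
    os.makedirs(out_dir, exist_ok=True)
    for ui_file in glob.glob(os.path.join(ui_dir, '*.ui')):
        name = os.path.splitext(os.path.basename(ui_file))[0]
        py_file = os.path.join(out_dir, 'ui_%s.py' % name)
        if (not os.path.exists(py_file)
                or os.path.getmtime(ui_file) > os.path.getmtime(py_file)):
            subprocess.run(['pyuic6', '-o', py_file, ui_file], check=True)

if __name__ == '__main__':
    regenerate_ui()  # call this before importing the generated ui modules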

Static data storage on server-side

Why is some data on the server side still stored in DBC files and not in the SQL database? In particular, spells (spells.dbc). What for?
We have a lot of bugs in spells, and it's very hard to understand what's wrong with a spell, but it's even harder to find the spell itself...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work. Databases are getting stronger and more versatile, and hard-coded data is proving difficult to work with. DBCs aren't really that heavy anyway, and the only reason we haven't made this change yet is that it's a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list; see the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves (we would lose all the changes when the files are re-extracted or lost), we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with this change is that we would have to modify the core to support it, and the function above already contains a lot of corrections. It would need intense testing to make sure nothing is broken in the process.
In there, by altering bits, you can remove or add certain properties to the desired spells instead of touching the hard-coded DBC files.
If you want an example, in this link I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the comment about damage can be misleading, but that's because I made a mistake; I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason), like item.dbc.
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch DBC because we don't do custom modifications, so there was no interest in having them server side.
But we wanted to change this and started to make them available directly in the DB, if you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of required dbc fields and we've also created some great conversion tools for DBC<->SQL, it's just a matter of "effort".
SQL conversion is useful to avoid using client data on the server (you can completely overwrite it if you don't want to go against the EULA) or simply to extend/customize it.
Here is the issue about DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584

Web test recording: automatically insert assertions during recording?

I need to automate the recording of web test scenarios as much as possible. Selenium IDE, or better the Katalon plugin for Chrome, seems very effective for this. However, what's missing from the recording are the assertions. So far I've found no real alternative to adding them by hand after the recording is done.
Now I know which parts of my pages contain relevant output text, i.e. are subject to test. For instance based on ID patterns, class names, tag hierarchy etc.
So given that my web app is in a "known good state", I could theoretically grab the text content of the relevant tags during the recording, and insert my assertions in the recorded scenario right there and then. My aim is to automate this.
Is there any way to do this in the Katalon plugin, Selenium IDE or any other automated web recording tool? I've read about Katalon Extension Scripts, but as far as I understand them, they cannot do what I want.
-- edit -- trying to rephrase and be more concrete --
During my recording, on certain events (e.g. on page load) I want the tool to find all elements that match certain selectors, and for each match store an assertion in the scenario that asserts the actual current value (e.g. div.innerText or input.value) of the element on the page. I want to define the events and the selectors that should trigger the insertion of assertions and the expression that defines the asserted value.
example
Suppose my webapp has a search page. I enter data in input fields, and hit the "search" button. These actions are recorded by most tools like Katalon Recorder. Now on the next page, the search results will show. Each search result will be in a div class="result". Suppose while recording I got two search results "foo" and "bar". So I want the tool to store in the scenario, while recording, an assertion that the first result should be "foo" and the second should be "bar", based on my rule that all $("div.result") should have their "innerText" asserted upon page load.
Avoid using Selenium IDE: compatibility with Firefox was discontinued as of Firefox 55, so you will not be able to run your tests on recent versions of Firefox.
When you perform actions in the browser, it is relatively easy to record those actions and re-run them later. It is 100% clear which button you just pressed.
You can probably make a million different assertions on a page; it would be difficult for any tool to guess which things you would like to assert and then automatically add those assertions, so I would be surprised if you found a tool that does exactly what you want.
What is keeping you from writing your own automated tests in code from scratch? From my experience, coding your own tests is not that much slower, but once you are used to doing this you will be able to tackle more complex problems with much more ease.
I have no experience with Katalon.
You can't add assertions at recording time, but you can use Selenese after recording, too.
Check official reference here: https://docs.katalon.com/display/KD/Selenese+%28Selenium+IDE%29+Commands+Reference
For what it's worth, I've managed to get what I needed as follows:
locate the Extension directory of Katalon Recorder in my Chrome
copy the entire contents to Eclipse
modify the source content/recorder.js, method Recorder.attach() by adding the following:
var self = this;
// "..." stands for the jQuery selectors for the page areas to assert on.
$(...).each(function(i, el) {
    var target = self.locatorBuilders.buildAll(el);
    // Form fields get an assertValue command, everything else an assertText.
    if (el.tagName == "SELECT" || el.tagName == "INPUT")
        recorder.record("assertValue", target, el.value, false);
    else
        recorder.record("assertText", target, el.innerText, false);
});
(Note: ... stands for the jQuery selectors that define the areas I know will contain relevant data in my application. This could be tweaked either in this source (e.g. by adding more selectors) or in the application itself (e.g. by adding a signalling class to certain tags in the HTML just to trigger assertions).)
In Chrome, activate "developer mode" and load the modified extension.
While recording, assertions are now automatically added for the relevant parts (... in the above) of my web app, on each page load.
happy!

How to handle changes in objects' structure in automated testing?

I'm curious to know how feasible it is to get away from the dependency on the application's internal structure when you create an automated test case. Otherwise you may need to rewrite the test case whenever a developer modifies part of the code for a bug fix, etc.
We could write several automated test cases based on the application's internal object structure, but let's assume that the object hierarchy changes after 6 months or so; how do we approach these kinds of issues?
I can't speak for other testing tools but at least in QTP's case the testing tool introduces a level of abstraction over the application so that non-functional changes in the application often (but not always) have no effect on the way the testing tool identifies the object.
For example in QTP all web elements are considered to be direct children of the document so that changes in the DOM (such as additional tables) don't change the object's description.
In TestComplete, there are a couple of ways to make sure that a changed app structure does not break your tests.
You can set up the Aliases tree of the Name Mapping feature. In this case, if the app structure changes, you need to modify the Aliases tree appropriately and your tests will keep working without having to be modified themselves.
You can use the Extended Find feature of the Name Mapping in order to ignore parts of the actual object tree and search for the needed objects at deeper levels.
This is what I was forced to do after losing all my work twice due to changes in the DOM structure:
Every single time I need to work with an object, I use the Find function with the ID of the object, searching for the object on the Page object. This way, whenever the DOM gets updated, my tests still run smoothly.
The only thing that will break my tests is if the object's ID gets changed, but that's not very likely to happen.
Here you can find some examples of the helper functions I use.

Documentation Generation - What boxes should I aim to tick?

I'm looking at requiring my team to document their code more thoroughly for some major upcoming projects and to make life a little less painful, I am steering towards XML documentation generators such as Sandcastle, Doxygen or Box Live Documenter.
What are the key considerations I should keep in mind when evaluating the best option and what experiences have led you to a particular decision?
For me the key considerations would be:
Fully automated: can it be set up in such a way that pretty much no outside work is required to create or edit the documentation?
Fully styled: can the documentation be fully styled so that it looks great in a wiki or PDF after it's generated? I should be able to change colors, font sizes, layouts, etc.
Good filtering: can I select only the items I want to be generated? I should be able to filter by namespace, file type, class, etc.
Customization: can I include headers, footers, custom elements, etc.?
I found Doxygen could do all of this. Our workflow is as follows:
Developer makes a change to the code
They update the documentation tags right above the code they just changed
We click a generate button
Doxygen will then extract all the XML documentation from the code, filter it to only include the classes and methods we want, and apply the CSS styling we’ve pre-made for it. Our end result is an internal wiki that looks the way we want, and doesn’t require editing.
Extra: we have all our projects in various git repositories. We pull them all down to one root folder and generate the docs from this root folder.
I'd be interested to know how others are automating even further.
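For what it's worth, a minimal sketch of that kind of automation in Python might look like the following (the repository list, folder names and Doxyfile are assumptions; it just shells out to git and doxygen):

import os
import subprocess

# Hypothetical list of repositories to document together.
REPOS = [
    'https://example.com/git/project-a.git',
    'https://example.com/git/project-b.git',
]
ROOT = os.path.abspath('docs-root')

def update_repos():
    # Clone each repository into ROOT, or pull it if it is already there.
    os.makedirs(ROOT, exist_ok=True)
    for url in REPOS:
        name = os.path.splitext(os.path.basename(url))[0]
        path = os.path.join(ROOT, name)
        if os.path.isdir(path):
            subprocess.run(['git', '-C', path, 'pull'], check=True)
        else:
            subprocess.run(['git', 'clone', url, path], check=True)

def generate_docs():
    # Run doxygen with a Doxyfile whose INPUT points at the root folder.
    subprocess.run(['doxygen', 'Doxyfile'], check=True)

if __name__ == '__main__':
    update_repos()
    generate_docs()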
Who is paying for the documentation and why? (is the system stable enough, does it add enough value)
Who is going to read it, and why is she not using a more effective communication channel?
(if documentation is the right channel, it's usually because of distance in time or place)
Who is going to keep it up to date?
When are you going to destroy it? (Automatically if it hasn't been read or updated in the past three months?)
I mostly prefer better code over more documentation to make my life less painful, but I like scenario & unit tests and a high-level architecture description.
[edit] Documentation costs time and money to write and keep up to date. JavaDoc style documentation has a serious detrimental effect on the amount of code simultaneously visible and might be a good idea for the developers using the code, but not for those writing it.
