I have this patch developed for mailman-2.1.13, and I would like to port it to mailman-2.1.15
I have never done this before, so I'm asking for advice here. How would you go about this task? Here are my thoughts on the subject:
I could search the 2.1.15 codebase for the code segments corresponding to the patch, but I would miss any new code that depends on the patched code.
I could look at the diff between 2.1.13 and 2.1.15 and search for parts conflicting with the patch, at the risk of drowning in the many changes between the two versions.
I could simply rewrite the patch, but I would need to understand all the logic of the patched application, which could take quite a while...
Your advice is welcome!
The keywords to search for are "rebase", "merge", and "conflict".
A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other.
You will find that resolving conflicts (e.g. porting a patch to a newer version) is usually not a trivial operation and cannot be done correctly without a deep understanding of the code you are working with.
It really depends on how much the main code base changed between mailman-2.1.13 and mailman-2.1.15; sometimes it is easier to rewrite the patch from scratch, and sometimes it is sufficient to merge the patch into the new version and fix the conflicts/problems it causes.
I would start by finding out what problem the patch originally solved and how it solved it. Then look at the 2.1.15 code and find out whether the original problem is still there and whether the patch can be applied directly (if the internals haven't changed much in the meantime) or whether a new approach is needed.
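If the internals mostly survived, the mechanical part is quick to try. A rough sketch, assuming the patch is a unified diff named mailman-2.1.13-fix.patch (a placeholder name; adjust -p to match the path prefix inside your diff):

cd mailman-2.1.15
patch -p1 --dry-run < ../mailman-2.1.13-fix.patch   # only reports which hunks would apply cleanly
patch -p1 < ../mailman-2.1.13-fix.patch             # apply for real once the dry run looks sane
find . -name '*.rej'                                # each .rej file marks a hunk you must port by hand

Anything that ends up in a .rej file is exactly the code that changed between the two versions, so that is where the real understanding work is needed.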
I stumbled upon an issue when I recently switched to VSCode as my editor.
I have several projects with a full (medium-complex+) autotools setup, and they all work fine. However, I discovered that the Makefile plugin for VSCode, in order to initialize itself (and find all dependencies and targets), starts by running
make --dry-run --always-make
as a first-time initialization. This throws the makefile (or actually the re-config) into an endless loop re-running "configure" (since the targets are never actually created on disk).
I have also confirmed this behavior with the smallest possible autoconf/automake setup. I can also sort of understand why this happens (and it seems make has an internal way to discover this exact situation via the special variable MAKE_RESTARTS, which could possibly be used to detect cyclic behavior; see the sketch below).
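To illustrate what I mean, a minimal sketch (assuming GNU make) that merely detects the restart rather than fixing anything:

# MAKE_RESTARTS is empty on the first pass and holds a count ("1", "2", ...)
# once make has re-executed itself after remaking an included makefile.
ifneq ($(MAKE_RESTARTS),)
$(info make has restarted $(MAKE_RESTARTS) time(s) - possible configure loop)
endif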
Is there a known best-practice workaround? Or is it even a reasonable expectation that these two options in combination should work? (It would be good to have a second opinion before I go down the rabbit hole of reminding myself of all the details I've forgotten about the magical land of autotools.)
Why is some server-side data still stored in DBC files rather than in the SQL database? In particular, spells (spells.dbc). Why?
We have a lot of spell bugs, and it's very hard to understand what's wrong with a spell, but it's even harder to find the spell itself...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work: databases are getting stronger and more versatile, and hard-coded data is proving hard to work with. Hell, the DBCs aren't really that heavy anyway, and the only reason we haven't made this change yet is that it's a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list if you check the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves (we would lose all the changes when the files are re-extracted or lost), we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with making this change is that we have to modify the core to support it, and the function above contains a lot of corrections. It would need intense testing to make sure nothing is screwed up in the process.
In there, by altering bits, you can remove or add certain properties of the desired spells instead of touching the hard-coded DBC files.
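For instance, a correction can look roughly like this (a sketch only: the spell ID is a placeholder and the exact SpellEntry field and store names differ between cores):

// somewhere inside SpellMgr::LoadDbcDataCorrections()
if (SpellEntry* spell = (SpellEntry*)sSpellStore.LookupEntry(12345)) // 12345 = placeholder spell ID
{
    spell->CastingTimeIndex = 1;   // entry 1 of SpellCastTimes.dbc means "instant"
    spell->Attributes |= 0x100;    // set one of the SPELL_ATTR0_* bits to add a property
    spell->AttributesEx &= ~0x4;   // clear a bit to remove a property
}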
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the comment about damage can be misleading, but that's because I made a mistake and I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs, like item.dbc, are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason).
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch the DBCs because we don't do custom modifications, so there was no interest in having them server-side.
But we wanted to change this and started making them available directly in the DB. If you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields and created some great DBC<->SQL conversion tools, it's just a matter of "effort".
The SQL conversion is useful to avoid using client data on the server (you can overwrite it completely if you don't want to go against the EULA) or just to extend/customize it.
Here is the issue about the DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584
My package introduces registry entries. Changes made by the site administrator should not be overwritten on reinstall of the package.
There are many ways to Rome. I chose ftw.upgrade. I like the declarative upgrade-step syntax. It's possible to use an upgrade directory for GenericSetup XML files like propertiestool.xml; there is no need to write handler Python code. The upgrade works well: the admin can upgrade from the control panel, and in my case the new property is added. In short: for a new property, only these have to be added: an upgrade-step declaration with source and destination versions, and a directory containing the properties.xml. Thumbs up!
You can control what Plone does when installing an add-on by providing an Extensions/install.py file with an install method inside:
def install(portal, reinstall=False):
    if not reinstall:
        setup_tool = portal.portal_setup
        setup_tool.runAllImportStepsFromProfile('profile-your.pfile:default')
This way you are driving what Plone should do when installing.
If you need it, the same goes for uninstall:
def uninstall(portal, reinstall=False):
    if not reinstall:
        setup_tool = portal.portal_setup
        setup_tool.runAllImportStepsFromProfile('profile-example.gs:uninstall')
This way you can prevent the uninstall step from being run when reinstalling.
Warning: as Mathias suggested, using the quickinstaller -> reinstall feature is bad.
Warning: this will probably not work anymore on Plone 5 (there's an open discussion about this).
I think what you describe is one of the problems that come with the increasing complexity of Plone's stack, and one of the reasons why it is no longer recommended to execute a re-install, but rather to provide a profile for each version of the add-on via upgrade steps (as Mathias mentioned). In my experience that increases dev time significantly and results in even more conflicts. Here are the relevant docs:
http://docs.plone.org/develop/addons/components/genericsetup.html#add-upgrade-step
Elizabeth Leddy once wrote an add-on to ease that pain, and I can confirm it does:
https://github.com/ampsport/amp.ezupgrade
And the great guys from FTW did, too; I have never used it, but it looks promising:
https://pypi.python.org/pypi/ftw.upgrade
I haven't used this one either; it even claims to have some extra goodies, like cleaning up broken OFS objects, and R. Patterson is on it:
https://github.com/collective/collective.upgrade
While we're here, the first good doc I could find about it ~1.5 years ago comes from Uwosh, of course:
http://www.uwosh.edu/ploneprojects/docs/how-tos/how-to-use-generic-setup-upgrade-steps
Another solution can be to check whether it's an initial install or a re-install, and set the properties programmatically via a Python script, conveniently called 'setuphandlers.py', as described in this answer:
How to check, if my product is already installed, when installing it?
That way one can still trigger re-installs without blowing it all up.
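A minimal sketch of that approach as a GenericSetup import handler (the package name, marker file, and property names below are placeholders; adapt them to your own package):

# setuphandlers.py - sketch only
from Products.CMFCore.utils import getToolByName

def post_install(context):
    # only act when our own profile is being imported
    if context.readDataFile('my.package-default.txt') is None:
        return
    portal = context.getSite()
    ptool = getToolByName(portal, 'portal_properties')
    sheet = ptool.site_properties
    # seed the value only on the first install; a re-install keeps the admin's value
    if not sheet.hasProperty('my_custom_setting'):
        sheet.manage_addProperty('my_custom_setting', 'default value', 'string')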
Lastly, a lot of the GS XML files understand the purge property: setting it to False will not overwrite the whole file, just the props you specify. This may or may not apply to your case; you can find samples in the official doc referenced above.
It seems that xUnit no longer supports extending TraitAttribute; they have sealed the class.
There are also some other issues with AutoFixture's plugin for AutoData(), which lets us inject randomly created data through an attribute. There are a few workarounds for this; however, I am attempting to evaluate this for a larger product overall. I liked the demos, since they could do small things like SQL, Excel, and custom attributes for categories.
It seems there was more functionality before the changes. I have looked at the site; it still says some of the features are returning, but there isn't much information.
Is there a new set of functionality coming out? Or possibly a change that will allow us to recreate the older functionality in a new way? It seems SQL and Excel have workarounds, but I can't find any information about when the latest version will be compatible with the "AutoFixture with xUnit.net data theories" NuGet package. I really like what I have seen, though I can say I don't like breaking changes when I look at enterprise solutions. I cringe a little when I think about what would happen if I had this in place in an enterprise, had made a lot of custom attributes, or used Moq and AutoFixture to populate data, and now all my tests were broken. So I guess the other question is: does xUnit tend to make a lot of breaking changes? There is the other option of moving xUnit back a version, though at some point I would need to know whether these things will be fixed or were permanently removed, since I wouldn't want to spend time on functionality that is being removed.
Another issue is AutoFixtureMoqAutoDataAttribute, which doesn't load without its companion NuGet package, and the companion NuGet packages are not being updated.
I guess the end question may be: does anyone know of any plans to get these features working with the current version of xUnit, so that I can start implementing now and expect to do mass replaces later? Or are these permanent breaking changes, meaning we shouldn't implement anything that is currently missing?
Thank you in advance.
Short answer
If you want to use xUnit.net 1.x with AutoFixture, use AutoFixture.Xunit.
If you want to use xUnit.net 2.x with AutoFixture, use AutoFixture.Xunit2.
Explanation
xUnit.net 2.0 introduced breaking changes, compared to xUnit.net 1.x (e.g. 1.9.2). For AutoFixture, we wanted to make sure that AutoFixture supports both. There are people who want to upgrade to xUnit.net 2.x as soon as possible, but there are also people who, for various reasons, will need to stay with xUnit.net 1.x for a while longer.
For the people who wanted or needed to stay with xUnit.net 1.x for the time being, we wanted to make sure that they'd still get all the benefits of various bug fixes and new features for the AutoFixture core, so we're maintaining two parallel (but feature complete) Glue Libraries for AutoFixture and xUnit.net.
As an example, we've just released AutoFixture 3.30.3, which addresses a defect in AutoFixture itself. This bug fix thus becomes available for both xUnit.net 1.x and 2.x users.
Thus, when you need to migrate from xUnit.net 1.x to xUnit.net 2.x, you should uninstall AutoFixture.Xunit and instead install AutoFixture.Xunit2. As far as I know, there should be feature parity between the two.
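For example, from the NuGet Package Manager Console the switch is just something like:

Uninstall-Package AutoFixture.Xunit
Install-Package AutoFixture.Xunit2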
Traits
AutoFixture.Xunit and AutoFixture.Xunit2 don't use the [Trait] attribute, so I don't know exactly what you have in mind regarding this.
AutoMoq
Again, when it comes to AutoFixture.AutoMoq, it doesn't depend on xUnit.net, so I don't understand the question here either. It sounds like a separate concern, so you may want to consider asking a separate question.
My team now has an SVN + Ankh setup in ASP.NET with trunk, branches, and tags. We switch branches and work on code, but many times there are inexplicable conflicts in files after simple changes. Why is this? I suspect we simply don't understand enough about how this works. Are there any do's and don'ts for how we should approach our daily changes and commits without causing conflicts? Is there a basic pecking order of operations to perform to achieve SVN zen? Should we be updating before committing, or something like that? Any help is greatly appreciated.
Always update before you commit. If you really work with branches, don't use switch, or only use it if you really understand the switch command and how it works; otherwise check out a branch into a fresh working copy, in other words create a new one.
Always branch and merge on the solution element, make sure you're fully up to date before merging (AnkhSVN will warn about this), and also make sure you have no modified files before merging.
Read up in the svnbook on when to use normal merging and when to use reintegrate.
Finally, if a conflict does occur, make sure you have a good 3-way merge tool to resolve it. AnkhSVN recognizes a lot of them automatically, but I really like SourceGear DiffMerge.
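As a rough command-line sketch of that daily cycle (the branch name is a placeholder; the same operations are available from AnkhSVN's UI):

svn update                           # always bring the working copy up to date first
svn status                           # lines starting with "C" are conflicts to resolve now
svn commit -m "describe the change"

# reintegrating a finished feature branch, run from an up-to-date, unmodified trunk working copy
svn merge --reintegrate ^/branches/my-feature
svn commit -m "Reintegrate my-feature into trunk"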