I am new to Plone and trying to learn how to set up and maintain a server. I realize I need to develop a schedule for packing the database. Right now I am just trying to test this using the pack function in the Zope control panel and also the command line (bin/zeopack).
I know in practice I should leave a week's worth of history, but if I pack to 0 days shouldn't I see all edit history disappear? I am not seeing this happen. What am I doing wrong?
You may be confusing the "undo" history with the version history. Packing the database gets rid of old, unused data. That eliminates your ability to undo older transactions.
Version history is not the same: versions are stored as regular, referenced data, so they are not considered unused and are not eliminated by a pack.
If you don't want edit history, turn off versioning.
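To see the distinction in practice, here is a minimal sketch you could run from a Zope debug prompt (e.g. bin/instance debug). The `portal` name, the 'Document' type, and the presence of CMFEditions' portal_repository tool are assumptions about your setup:

    # Minimal sketch, run from a Zope debug prompt where `portal` is the
    # Plone site root; assumes CMFEditions (portal_repository) is installed.
    import transaction

    # Pack the database, keeping 0 days of history. This removes old
    # transaction records (the undo history), never the version history.
    portal._p_jar.db().pack(days=0)

    # Stop recording new edit history for a type by removing it from the
    # versionable content types; 'Document' is just an example.
    repo = portal.portal_repository
    types = list(repo.getVersionableContentTypes())
    if 'Document' in types:
        types.remove('Document')
        repo.setVersionableContentTypes(types)

    transaction.commit()

After the pack, undo history is gone but existing versions remain; after the versioning change, no new versions are recorded for that type. On the command-line side, bin/zeopack as generated by a typical buildout usually accepts a days option as well.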
Hi! I'm new to git, so this is something that has been bothering me for a while...
As you already know, below is the most popular git flow. But I have sometimes had problems creating a new feature branch from the develop branch: if someone before me committed and merged something bad into develop, and I then unknowingly create a new branch based on it, I will be working on a potentially broken branch, right?
Then in my new team, I saw a git flow like this:
Every feature branch is created from master
When a feature is complete it is merged into the develop branch (testing env)
If the testers see no problem with the develop branch, then:
A release branch is created from master
Merge all the completed features into the release branch (and test one more time in the staging environment).
There is no need to merge the release branch back into develop. If there is any problem, fix it on the feature branch, merge that into develop (and test again), and if it's OK, merge it into the release branch.
Merge release branch into master.
Doing it this way ensures that every new feature branch is created from code that has already gone through two phases of testing, or so I think...
Please give me advice on this one: is it good or not? Are there disadvantages I don't know about? My new team has been working with this git flow for a few years with no problems so far. But when I suggested it to my friends, they didn't like the idea, saying I should follow the popular one... I'm kind of confused right now. Thank you very much.
First of all, git-flow is just a framework. There is no reason why you should not adjust it to your needs.
But there are some disadvantages in creating branches that way:
(1) Your testers might (and most certainly will at some point) still miss a bug which then will end up on master. We are all humans and it's simply impossible - no matter how many testers and test phases you have - to produce software that is completely bug-free. master is just your new develop.
(2) The more important issue here: you always have to wait until new features are on master to continue working on them. Bigger features are often split into multiple issues / user stories / whatever to make them easier to work on. You would normally merge part 1 into develop (assuming it's a shippable feature in itself) and then continue working on it using develop as your base.
This is going to be hard (meaning delayed) with your master-based method, because part 1 only reaches master once the whole release ships.
Hi, we are at a point with our WordPress website where it would seem appropriate to update the WP version. We have an existing busy site, and have paid a good amount to hack plugins and core stuff to get it working the way we want it to. I'm debating the wisdom of updating the entire foundation of the site over a few minor vulnerabilities and enhancements that aren't interesting for us. My thinking is this.
The reason for upgrading appears to be because a given version may have some security issues. So you go through the painful process of updating, which usually kills all your plugins. You then spend many frustrating hours talking to plugin support people telling you 'it must be clashing with other plugins', or you take the time to / pay someone to fix everything (again).
After upgrading you have to take time to relearn the system and all the changes. You may have to adjust your workflow due to these changes, and maybe retrain your entire team. And after all that, in 5 minutes hackers find the security issues with the current version - which MAY be worse than your previous version - and you have to go through the whole operation again.
The aim of running a website is to not spend each and every day dealing with upgrade issues. The aim is to have a system that does what you need it to do, the way you want to do it. Once you have that, it's not useful to keep changing it 'just because', it's not a fashion show. It's not an iPhone, pushing users to upgrade their entire phone because they added a letter on the end of the phone name and changed the color from grey to a slightly different grey.
I am of the opinion that it is much more economical - once you have a system set up the way you need it - to just get a dev to fix any vulnerabilities in your existing WP code. And, somewhere down the line, if there were a VERY good reason to update (e.g. a major new PHP version), then build the site from scratch and get a dev to migrate the data. This could be 7 years later or more. The time, effort, and money you would save doing it this way seem a lot more logical.
Say you were building a Ford Model T in your garage. You are ordering parts from a supplier, and you are halfway through the build when your supplier starts sending you parts for a Ford Capri - "Oh, we are doing parts for Capris now". So you scrap the Model T and start building a Capri. Halfway through the build, your supplier starts sending you parts for a Mustang. And so on. You will spend your entire life half-building that car and at no point end up actually driving it.
Given the performance issues of WP out of the box - and the logical progression of a successful site toward a bespoke solution - it would make sense to me to take your existing WP install, strip out the crap you don't need, and optimize everything. At that point it's not really WP any more: vulnerabilities fixed, and it is essentially already a bespoke solution without needing to start from the absolute ground up.
Does anyone have any thoughts on this? Serious question, we need to decide if we are going to go through all this a 4th time and I'm not really feeling it. Any input would be appreciated, thanks. To make this 'it must be a specific question' I will just ask: are you a very experienced WP dev, and do YOU keep jumping through the update hoop every 5 minutes?
Why is some server-side data still stored in DBC files rather than in the SQL DB? In particular, spells (spells.dbc). What for?
We have a lot of bugs in spells, and it's very hard to understand what's wrong with a spell, and even harder to find it in the DBC files...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work. Databases are getting stronger and more versatile, and hard-coded data is proving hard to work with. Hell, DBCs aren't really that heavy anyway, and the reason we haven't made this change yet is... we have no reason other than it being a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list, as you can see in the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves (we would lose all the changes whenever they are re-extracted or the files are lost), we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with making this change is that we would have to modify the core to support it, and the function above already contains a lot of corrections. It would need intense testing to make sure nothing is screwed up in the process.
Here, by altering bits, you can remove or add certain properties on the desired spells instead of touching the hard-coded DBC files.
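As a rough illustration of what that bit-altering means (the real corrections are C++ code inside SpellMgr::LoadDbcDataCorrections(); the spell data and flag value below are made up for the example):

    # Illustrative sketch only; field names, flag values, and the spell
    # record are hypothetical, not AzerothCore's actual structures.
    SPELL_ATTR_EXAMPLE_FLAG = 0x0008            # hypothetical attribute bit

    spell = {'Attributes': 0x0001, 'CastingTimeIndex': 5}

    spell['Attributes'] |= SPELL_ATTR_EXAMPLE_FLAG   # add a property bit
    spell['Attributes'] &= ~0x0001                   # clear another bit
    spell['CastingTimeIndex'] = 1                    # index 1 is the instant-cast
                                                     # entry in SpellCastTimes.dbc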
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the comment about damage can be misleading; that's because I made a mistake, and I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs, like item.dbc, are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason).
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch the DBCs, because we don't do custom modifications, so there was no interest in having them server-side.
But we wanted to change this and started to make them available directly in the DB, if you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields, and we've also created some great DBC<->SQL conversion tools, it's just a matter of effort.
The SQL conversion is useful to avoid using client data on the server (you can completely overwrite it if you don't want to go against the EULA), or just to extend/customize it; a minimal sketch of the idea follows below.
Here is the issue about the DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584
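To make the conversion idea concrete, here is a hedged sketch of a DBC-to-SQL dump for the WotLK-era WDBC layout (magic 'WDBC' followed by four little-endian uint32 header fields). It treats every column as uint32 and ignores the string block, which a real converter would resolve; the table name and paths are illustrative, not AzerothCore's actual schema:

    # Sketch of a WDBC -> SQL dump; column typing and string-block
    # resolution are deliberately omitted for brevity.
    import struct

    def dbc_to_sql(path, table):
        with open(path, 'rb') as f:
            if f.read(4) != b'WDBC':
                raise ValueError('not a WDBC file')
            records, fields, record_size, string_size = struct.unpack('<4I', f.read(16))
            for _ in range(records):
                raw = f.read(record_size)
                # read each field as an unsigned 32-bit integer
                row = struct.unpack_from('<%dI' % fields, raw)
                print('INSERT INTO %s VALUES (%s);' % (table, ', '.join(map(str, row))))

    # Usage (names illustrative): dbc_to_sql('dbc/Spell.dbc', 'spell_dbc')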
My package introduces registry entries. Changes made by the site administrator should not be overwritten when the package is reinstalled.
Many roads lead to Rome. I chose ftw.upgrade. I like the declarative upgrade-step syntax: it's possible to use an upgrade directory for GenericSetup XML files like propertiestool.xml, with no need to write Python handler code. The upgrade works well; the admin can upgrade from the control panel, and in my case the new property is added. In short, for a new property only these have to be added: an upgrade-step declaration with source and destination versions, and the directory where the properties.xml is found. Thumbs up!
You can pilot what Plone does when installing an add-on by providing an Extensions/install.py file with an install method inside:
def install(portal, reinstall=False):
    # Run the import steps only on first install, not on reinstall
    if not reinstall:
        setup_tool = portal.portal_setup
        setup_tool.runAllImportStepsFromProfile('profile-your.pfile:default')
This way you are driving what Plone should do when installing.
If you need it, the same approach works for uninstall:
def uninstall(portal, reinstall=False):
    # Run the uninstall profile only on a true uninstall, not on reinstall
    if not reinstall:
        setup_tool = portal.portal_setup
        setup_tool.runAllImportStepsFromProfile('profile-example.gs:uninstall')
This way you can prevent the uninstall steps from being run when reinstalling.
Warning: as Mathias suggested, using the quickinstaller -> reinstall feature is bad practice.
Warning: this will probably not work anymore on Plone 5 (there's an open discussion about this).
I think what you describe is one of the problems that comes with the increasing complexity of Plone's stack, and one of the reasons why it is no longer recommended to execute a re-install, but instead to provide a profile for each version of the add-on via upgrade steps (as Mathias mentioned). In my experience that increases dev time significantly and results in even more conflicts. Here are the relevant docs:
http://docs.plone.org/develop/addons/components/genericsetup.html#add-upgrade-step
Elizabeth Leddy once wrote an Add-On to ease that pain and I can confirm it does:
https://github.com/ampsport/amp.ezupgrade
The great guys from FTW wrote one, too; I never used it, but it looks promising:
https://pypi.python.org/pypi/ftw.upgrade
I haven't used this one either; it even claims to have some extra goodies, like cleaning up broken OFS objects, and R. Patterson is on it:
https://github.com/collective/collective.upgrade
While we're at it, the first good doc I could find about this, around 1.5 years ago, comes from Uwosh, of course:
http://www.uwosh.edu/ploneprojects/docs/how-tos/how-to-use-generic-setup-upgrade-steps
Another solution can be to check whether it's an initial install or a re-install, and set the properties programmatically in a Python script, conventionally called 'setuphandlers.py', as described in this answer:
How to check, if my product is already installed, when installing it?
That way one can still trigger re-installs without blowing it all up.
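As a minimal, hedged sketch of that approach (the profile name, marker file, and property names are made up for illustration; readDataFile and getSite are standard GenericSetup context methods):

    # setuphandlers.py sketch; assumes the handler is registered as a
    # GenericSetup import step for a hypothetical 'my.package:default' profile.
    def setup_various(context):
        # Only act when our own profile is imported (the marker file is
        # shipped in the profile directory).
        if context.readDataFile('my.package-default.txt') is None:
            return
        portal = context.getSite()
        props = portal.portal_properties.site_properties
        # Only create the property when it is missing, so a value changed
        # by the site administrator survives a re-install.
        if not props.hasProperty('my_custom_prop'):
            props._setProperty('my_custom_prop', 'default-value', 'string')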
Lastly, a lot of the GS XML files understand the purge property: setting it to False (e.g. <property name="some_lines_prop" purge="False">) will not overwrite the whole set, just add your given props. This may or may not apply to your case; you can find samples in the official doc referenced above.
My team now has an SVN + Ankh setup in ASP.NET with trunk, branches, and tags. We switch branches and work on code, but many times there will be inexplicable conflicts in files after simple changes. Why is this? I suspect we simply don't understand enough of how this works. Are there any do's and don'ts, or how should we be approaching our daily changes and commits, without causing conflicts? Is there a basic pecking order of operations to perform to achieve SVN zen? Are we updating before committing or something? Any help is greatly appreciated.
Always update before you commit. If you really work with branches, don't use switch, or only use it if you really understand the switch command and how it works; otherwise, check out the branch into a fresh working copy, in other words, create a new one.
Always branch and merge on the solution element. Make sure you're fully up to date before merging (AnkhSVN will warn about this), and also make sure you have no modified files before merging.
Read up in the svnbook on when to use normal merging and when to use reintegrate.
Finally, if a conflict does occur, make sure you have a good 3-way merge tool to resolve it. AnkhSVN recognizes a lot of them automatically, but I really like SourceGear DiffMerge.