Updating WordPress - 'Philosophical' Question: do you upgrade, or...?

Hi, we are at a point with our WordPress website where it would seem appropriate to update the WP version. We have an existing, busy site and have paid a good amount to hack plugins and core code to get it working the way we want. I'm debating the wisdom of updating the entire foundation of the site over a few minor vulnerabilities and enhancements that aren't interesting to us. My thinking is this.
The reason for upgrading appears to be that a given version may have some security issues. So you go through the painful process of updating, which usually kills all your plugins. You then spend many frustrating hours talking to plugin support people who tell you 'it must be clashing with other plugins', or you take the time to fix everything yourself / pay someone to fix everything (again).
After upgrading you have to take time to relearn the system and all the changes. You may have to adjust your workflow because of them, and maybe retrain your entire team. And after all that, within 5 minutes hackers find the security issues in the new version - which MAY be worse than your previous version - and you have to go through the whole operation again.
The aim of running a website is not to spend each and every day dealing with upgrade issues. The aim is to have a system that does what you need it to do, the way you want to do it. Once you have that, it's not useful to keep changing it 'just because'; it's not a fashion show. It's not an iPhone, pushing users to upgrade their entire phone because the maker added a letter to the end of the name and changed the color from grey to a slightly different grey.
I am of the opinion that it is much more economical - once you have a system set up the way you need it - to just get a dev to fix any vulnerabilities in your existing WP code. And, somewhere down the line, if there were a VERY good reason to update (e.g. a major new PHP version), you build the site from scratch and get a dev to migrate the data. This could be 7 years later or more. The time, effort and money you would save doing it this way seem to make it the far more logical choice.
Say you were building a Ford Model T in your garage. You are ordering parts from a supplier, and halfway through the build your supplier starts sending you parts for a Ford Capri - "Oh, we are doing parts for Capris now." So you scrap the Model T and start building a Capri. Halfway through that build, your supplier starts sending you parts for a Mustang. And so on. You will spend your entire life half-building cars and at no point end up actually driving one.
Given the performance issues of WP out of the box - and the logical progression of a successful site toward a bespoke solution - it would make sense to me to take your existing WP install, strip out the crap you don't need and optimize everything. At that point it's not really WP any more: the vulnerabilities are fixed, and you essentially already have a bespoke solution without having started from the absolute ground up.
Does anyone have any thoughts on this? Serious question; we need to decide if we are going to go through all this a 4th time and I'm not really feeling it. Any input would be appreciated, thanks. To satisfy the 'it must be a specific question' rule, I will just ask: are you a very experienced WP dev, and do YOU keep jumping through the update hoop every 5 minutes?

Related

Creating an incremental .df file from 2 different .df files

Is there a method to create an incremental .df file from 2 different .df files? Or do I have to load both files into 2 blank databases and then use the 'create incremental .df file' feature of the Data Administration tool?
I'm using OpenEdge 10.2B08.
A Data Definition (.df) file is a listing of things to add, drop or update in a database. It is plain text, so you can view it in a text editor, and you can cut and paste the contents of one .df file into another. However, you may run into problems if the changes from the two files conflict. For example, file 1 may say to drop field xyz while file 2 says to update field xyz. This will cause an error, and the entire .df will be backed out.
If you're sure there are no conflicts, just paste the contents of file 2 into file 1, just above the footer. The footer is the last five lines in the file:
.
PSC
cpstream=ISO8859-1
.
0000000610
The very last line is a character count. You may have trouble loading the new .df if you don't update it to match the new file length. And be sure to test the .df before trying it in production.
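If you'd rather script the splice than hand-edit it, here is a rough Python sketch of the same idea. It assumes the 5-line footer format shown above, and it assumes the final line is a zero-padded count of every character that precedes it - both are assumptions you should verify against your own .df files before trusting the output (and, as above, test the result before production):

# Sketch only: merge two OpenEdge .df files by splicing file 2's body in
# just above file 1's footer, then recompute the trailing character count.
# The footer layout and the meaning of the count line are assumptions
# based on the example above - verify against your own .df files.
def merge_df(file1_path, file2_path, out_path, footer_lines=5):
    with open(file1_path, newline="") as f:
        lines1 = f.readlines()
    with open(file2_path, newline="") as f:
        lines2 = f.readlines()

    body1, footer = lines1[:-footer_lines], lines1[-footer_lines:]
    body2 = lines2[:-footer_lines]        # drop file 2's footer entirely

    merged = body1 + body2 + footer[:-1]  # keep footer, minus the count line
    count = sum(len(line) for line in merged)
    merged.append("%010d\n" % count)      # zero-padded, like 0000000610

    with open(out_path, "w", newline="") as f:
        f.writelines(merged)

merge_df("delta1.df", "delta2.df", "combined.df")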
Loading them both into blank databases and then dumping one solid DF remains the best solution, in my opinion.
Of course you could shave a couple of minutes by appending one file to the other; I think you can even remove the footer and not bother with the count, and it should still work.
As with everything, it depends on how critical the situation is. Are you looking at significant downtime on a production database? Usually there shouldn't be much compromise on anything that will be applied to production. A solid DF is better than a "hacked, 99.9% safe" one. That's the difference between a good and a bad DBA. The good one may seem to work a bit slower over a decade, but once in that decade the bad one will eventually cause some critical downtime for the business... totally offsetting the silly productivity advantage he may appear to have.
I have fixed countless mistakes all around for the past 15 years; I made one. It's not a fun feeling. Being woken up early on a Monday by a panicked helpdesk guy describing the issue. Quickly realising it's related to the previous night's maintenance I did. Replying to get all the users out, a country-wide operation grinding to a halt while I try to figure out what's wrong and how to fix it. 2,500 employees were being paid but weren't able to work... with customers in front of them with money to spend and no time to lose.
It took me 3 hours. It wasn't a lazy mistake... just a mistake with no easy way to notice it during the usual post-update quick tests to make sure everything runs fine. We had GUI code running against a training database while the usual business logic was being executed on Unix against production.
You don't need to be a math genius to compute that a silly DBA mistake was costing more than his annual salary every few minutes.
Mistakes happen, but folks, if skipping a few minutes of added work is the root cause of one, it's time to leave the seat to someone else, I'm afraid. No shame there; it's simply a job that requires that mindset, and some people need decades to eventually get caught off guard and realize it. Nothing is very fun about spending precious time triple-checking a 99.9%-safe update... nobody will notice the added effort next Monday, since it will work as usual regardless. Everybody will yell if it's not working, your fault or not.
My multi-million mistake never got mentioned again once it was solved. Everyone knew very well that I can count the lost cash, and that I've never cut corners for any reason in my whole career. And it's still only money... I could follow up with the mistake that almost killed 2 young workers a few years later.
Stress and way too many hours can lead to a reflex press of F1 that kills mechanics working inside some automation device.
Stay safe with that keyboard, guys - it's serious business ;)

Packing Plone data

I am new to Plone and trying to learn how to set up and maintain a server. I realize I need to develop a schedule for packing the data. Right now I am just trying to test this using the pack function in the Zope control panel and also the command line (bin/zeopack).
I know that in practice I should leave a week's worth of history, but if I pack to 0 days, shouldn't I see all edit history disappear? I am not seeing this happen. What am I doing wrong?
You may be confusing the "undo" history with the version history. Packing the database gets rid of old, unused data - the superseded revisions of objects - which eliminates your ability to undo older transactions.
Version history is not the same. Versions are stored as regular, live objects, so they are not considered unused data and are not removed by packing.
If you don't want edit history, turn off versioning.
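For what it's worth, here is a minimal sketch of what packing does at the storage level, using the plain ZODB API rather than Plone itself (the file name and the "counter" key are made up for illustration): superseded revisions go away, anything still reachable stays.

# Minimal sketch, plain ZODB (not Plone): pack removes superseded object
# revisions (the data undo depends on), not live, reachable objects.
from ZODB import DB
from ZODB.FileStorage import FileStorage
import transaction

db = DB(FileStorage("Data.fs"))   # illustrative file name
conn = db.open()
root = conn.root()

root["counter"] = 1
transaction.commit()
root["counter"] = 2               # the revision holding 1 is now superseded
transaction.commit()

db.pack(days=0)                   # undo of the first commit is gone now,
                                  # but root["counter"] == 2 survives
db.close()

Stored versions in Plone behave like the surviving object here: they are current data in their own right, which is why zeopack leaves them alone.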

How to share code with continuous integration

I've just started working in a continuous integration environment (TeamCity). I understand the basic idea of not getting so deep into your code that you are never able to build it to test functionality, etc. However, when there is deep coding going on, it will occasionally take me several days to reach buildable code - but in the interim other team members may need to see my work.
If I check the code in, it breaks the build. However, if I don't check it in, my team members are unable to see my most recent work. I'm wondering how this situation is best dealt with.
A tool like Code Collaborator (Google link; smartbear.com is down..) would allow your peers to see your code without you committing it. Instead, you just submit it for review.
It's a little extra trouble for them to run it, though.
Alternatively, set up a second branch/fork of your codebase for you to work in; your peers can sync to that, and it won't break the build server. When you're done working in your own branch, you can merge it back into mainline/trunk/whatever.
In a team environment, it is usually highly undesirable for anybody to be in an unbuildable state for days. I try to break large code deliveries into as many buildable check-ins as I can. At a minimum, create and check in your interfaces even if you do not have the implementation ready, so others can start to code against them - see the sketch below.
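As a rough illustration of that "interfaces first" check-in, in Python - ReportExporter and its methods are invented names for the example, not anything from the question:

# Sketch: check in the contract first so teammates can code against it
# while the real implementation is still days away. Names are invented.
from abc import ABC, abstractmethod

class ReportExporter(ABC):
    """The interface the rest of the team can program against immediately."""

    @abstractmethod
    def export(self, report_id: int, dest_path: str) -> None:
        """Render the given report and write it to dest_path."""

class PdfReportExporter(ReportExporter):
    """Placeholder: keeps the build green, fails loudly if actually called."""

    def export(self, report_id: int, dest_path: str) -> None:
        raise NotImplementedError("PDF export lands in a later check-in")

The stub builds, so CI stays green, and anyone who calls it prematurely gets an unambiguous error instead of silently wrong behavior.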
One of the primary benefits of Continuous Integration is that it shows you when things break and when things are fixed. If you commit code that breaks the system, other developers will be forced to get it into a working state before continuing development. This is a good thing, because it doesn't allow changes to be built on top of broken things (which could cause issues where a co-worker's code works against the broken system but stops working once the initial break is fixed).
This is also a prime example of a good time to use branches/forks, and simply merge to the trunk when all the broken things are fixed.
I am in exactly the same situation here. As the build engineer, I have this working beautifully.
First of all, let me break down the branches/projects. @Dolph Mathews has already mentioned branching and, tbh, that is an essential part of getting this setup to work.
Take the main code base and branch it into several personal or smaller team branches, e.g. branch_team_a, branch_team_b, branch_team_c.
Then set up TeamCity to build against these branches under different project headings, so you will eventually have the following: Project Main, Project Team A, Project Team B, Project Team C.
Thirdly, set up developer check-ins so that they run pre-commit builds against the broken-down branches. You can find the TC plugin for this under tools and settings; they have it for IntelliJ and VS.
You now have your 3-tier setup:
- A developer kick-starts a remote-run pre-commit build from their desktop against their project. If it passes, the code gets checked into the repository, i.e. branch_team_a
- Project Team A passes after several check-ins, at which point you integrate your changes from branch_team_a to the main branch
- Project Main builds!
If all is successful, you have a release candidate. If one part fails - Project A, B or C - it doesn't get merged into main. This has been my tried and tested method, and it works every time. It also vastly improves team communication.

"Selling" trac/buildbot/etc to upper management

My team works mostly with Flex-based applications. That being said, there are nearly no conventions at all (even getting them to refactor is a miracle in itself).
Coming from a .NET + CruiseControl.NET background, I've been aching to get everyone using some decent tracking software (we're using a todo list coded in PHP now) and CI; I figured trac + BuildBot would be a nice option.
How would you convince upper management that this is the way to go, along with some of the rules mentioned in this post? One of my main issues is that everyone codes without thinking (you'd be amazed at the kind of "logic" this spawns...).
Thanks
Is there anything you could do now that wouldn't require permission from anyone else? Could you start by using trac/buildbot/etc. for just your own work, then bring others in as they get interested?
In my experience you can get quite far by doing without asking.
Tell management that they'll be better able to keep an eye on progress with such a tool.
Are there specific benefits to the route that you're suggesting that you could show them without them having to buy in?
I had an experience getting my team to accept a Maven + CruiseControl CI setup. Basically, I tried for a few days to get them to go along with it, and they kept balking because it was unfamiliar. Then I just did it on my own and had all broken builds emailed to the mailing list. That night the project lead made a check-in that broke the build (he just forgot a file) and, of course, everybody was emailed about his screw-up.
The next day he came over to me and said, "I get it now."
It required no effort from him to get involved, and he got to see the benefits for free.

Are there any *FREE*, open source .NET shopping carts that allow bulk importing?

I have reviewed DashCommerce, nopCommerce and DotShoppingCart for possible use, and none of them seems to offer any way to do bulk product/category/manufacturer/etc. imports from existing data (DotShoppingCart seems to allow it only in the paid version).
The company I work for has some 30,000 products that we would need to load, and at least a thousand categories or so. Obviously it would be ridiculous to have to type all of that in manually, and as I've stated in previous questions, the company is insanely cheap and won't pay for software, so I need a free solution.
I don't have the time to roll my own solution by following the ASP.NET MVC Storefront series, or else I would just do that. My boss seems to think creating an online store is trivially simple. (I had slapped together a Classic ASP site a few months back, but we recently changed our primary vendor, so I can't use most of it; it was pretty much hacked together anyway, and I can't really reuse it without reworking a lot of it for the new supplier.) I don't want to hear it from him if/when I tell him I need a couple of months; he's already waited 90 days, he has some SEO expert on retainer ready to start blogging/marketing it, and he doesn't understand that writing software takes time - it's not something that can be thrown together in a week or even a month.
Is there anything out there that can meet these requirements? In a pinch, I guess I could install DashCommerce or something and interrogate the database schema it creates to force an import myself, giving him the quick solution he seems to want.
It would take about 20 minutes to write a simple app that connects to the database and inserts the rows. All you need is a loop that reads a row and writes a row... something like the sketch below.
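For instance, a minimal Python sketch using pyodbc against SQL Server - where the CSV layout, the Products table, its columns, and the connection string are all placeholder assumptions you would swap for the actual schema of whichever cart you install:

# Sketch: bulk-load products from a CSV into the cart's database.
# Table/column names and the connection string are placeholders -
# inspect the schema your chosen cart creates and adjust to match.
import csv
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=ShopDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

with open("products.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):        # read a row...
        cursor.execute(                  # ...write a row
            "INSERT INTO Products (Sku, Name, Price, CategoryId) "
            "VALUES (?, ?, ?, ?)",
            row["sku"], row["name"], float(row["price"]), int(row["category_id"]),
        )

conn.commit()   # a single commit at the end keeps 30,000 inserts quick
conn.close()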
