I have an old site with a Data.fs that has travelled through aeons and accumulated enough cruft to be comparable with the yard of an average used-car dealer.
Even after manually removing folders and packing the database, the Data.fs still seems to take up too much space.
What would be a good process for hunting down and reclaiming this "lost space" in Data.fs? Something like printing out the object tree and the relative sizes of the folders (recursively).
See ZODB/scripts/netspace.py or enfold.recipe.zodbscripts. There are ways to get netspace installed into your buildout with all the right path info set up. Model yours after that, but use netspace=ZODB.scripts.netspace:Main instead of migrateblobs=ZODB.scripts.migrateblobs:main.
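If you just want a quick breakdown without wiring anything into buildout, something along these lines gives a per-class size report straight from the Data.fs. This is only a rough sketch of what the analyze.py/netspace-style scripts do; the 'Data.fs' path and the top-20 cutoff are placeholders, and it counts every record in the file, so old revisions will inflate the numbers until you pack:

    # Rough sketch: aggregate record sizes per class in a Data.fs, similar in
    # spirit to ZODB's analyze.py / netspace.py. The 'Data.fs' path and the
    # top-20 cutoff are placeholders.
    from collections import defaultdict

    from ZODB.FileStorage import FileStorage
    from ZODB.utils import get_pickle_metadata

    sizes = defaultdict(int)
    counts = defaultdict(int)

    storage = FileStorage('Data.fs', read_only=True)
    for txn in storage.iterator():
        for record in txn:
            if record.data is None:  # skip records that carry no pickle data
                continue
            module, klass = get_pickle_metadata(record.data)
            key = '%s.%s' % (module, klass)
            sizes[key] += len(record.data)
            counts[key] += 1
    storage.close()

    for key in sorted(sizes, key=sizes.get, reverse=True)[:20]:
        print('%12d bytes %8d records  %s' % (sizes[key], counts[key], key))

Whatever dominates that list (catalog BTrees are a common culprit in Plone sites) is usually the place to start digging.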
This doesn't help track it down, but you could try:
Mirroring the configuration, but with a clean Data.fs;
Exporting (.zexp) from the live site;
Importing into the clean one;
If it all goes well, switching over to the new DB.
You can also take a look at these links:
Update the database
Inspect the database with eye
Another tip to look at the database size
The eye one looks promising.
And don't forget that no one expects the Plone Inquisition:
http://pypi.python.org/pypi/mr.inquisition
I am trying to optimize my site's database using phpMyAdmin.
I want to know if it is safe to delete "post_meta", as it has occupied more than 1 GB of my database.
Update: I removed all the post revisions, spam comments, etc., along with all the plugin data, using https://wordpress.org/plugins/plugins-garbage-collector/. However, I still see that my postmeta table contains 1 GB. When I look inside, I can still see a lot of old plugin entries. I followed this guide: https://crunchify.com/better-optimize-wordpress-database.../ to remove some of them. However, I am confused about whether this is the right and shortest way to fix this issue, or whether there is another way to clean the old plugin data from phpMyAdmin.
First, you should know that MySQL usually expands the size of its files on disk as needed, but doesn't shrink them. Deleting old data won't reclaim that disk space automatically. You can supposedly use the SQL command OPTIMIZE TABLE <table_name>; to do that manually, but you'll have to do it per table. I say "supposedly" because I've seen differing reports on whether this works to reclaim all the disk space from deleted rows, or if it only reclaims disk space from outdated indexes.
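For what it's worth, if you decide to try it, you can loop OPTIMIZE TABLE over every table rather than running it one table at a time. Here's a rough sketch (the credentials and database name are placeholders, and it assumes the mysql-connector-python package); take a backup first, since on InnoDB this effectively rebuilds each table:

    # Sketch: run OPTIMIZE TABLE over every table in the database.
    # Host/user/password/database are placeholders; back up before running.
    import mysql.connector

    conn = mysql.connector.connect(host='localhost', user='wp_user',
                                   password='secret', database='wordpress')
    cur = conn.cursor()

    cur.execute("SHOW TABLES")
    tables = [row[0] for row in cur.fetchall()]

    for table in tables:
        # Identifiers can't be bound as parameters, so quote them with backticks.
        cur.execute("OPTIMIZE TABLE `{}`".format(table))
        for row in cur.fetchall():  # OPTIMIZE returns a small status result set
            print(row)

    cur.close()
    conn.close()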
Next, I would not delete an entire table. It's almost certain that WordPress will stop functioning properly if it can't access the post_meta table. Even removing individual entries risks corrupting your data in strange ways: one post may link to another, and if the referenced entry is missing, WordPress will show an error because of the missing post, things like that.
Finally, to specifically address your last question "[I]s [there] any other way to clean the old plugin data from phpmyadmin?" I highly suggest using WordPress-aware tools for this job. You can delete data from your database directly using phpMyAdmin (by either structuring a query properly or just going through and using the delete icon to remove a row you don't want), but not in a way that will be aware of how WordPress processes the data and relationships between tables. There are probably tools meant specifically for that job that would make things easier for you.
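That said, if you do end up "structuring a query properly", the usual first target is postmeta rows whose parent post no longer exists. The sketch below (assuming the default wp_ table prefix and the same placeholder credentials as above) only counts them; the DELETE is left commented out so you can sanity-check the number first, and backing up still applies:

    # Sketch: find (and optionally delete) postmeta rows whose parent post is gone.
    # Assumes the default "wp_" table prefix; credentials are placeholders.
    import mysql.connector

    conn = mysql.connector.connect(host='localhost', user='wp_user',
                                   password='secret', database='wordpress')
    cur = conn.cursor()

    orphan_filter = ("FROM wp_postmeta pm "
                     "LEFT JOIN wp_posts p ON p.ID = pm.post_id "
                     "WHERE p.ID IS NULL")

    cur.execute("SELECT COUNT(*) " + orphan_filter)
    print("orphaned postmeta rows:", cur.fetchone()[0])

    # Uncomment only after checking the count (and after a backup):
    # cur.execute("DELETE pm " + orphan_filter)
    # conn.commit()

    cur.close()
    conn.close()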
Why is some server-side data still stored in DBC files rather than in the SQL database? In particular, spells (spells.dbc). What is the reason for that?
We have a lot of bugs in spells, and it's very hard to understand what's wrong with a spell, but it's even harder to find the spell itself...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work. Databases are getting stronger and more versatile, and having hard-coded data is proving hard to work with. Hell, the DBCs aren't really that heavy anyway, and the only reason we haven't made this change yet is that it's a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list; check the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves (we would lose all that progress whenever they are re-extracted, or if the files are lost), we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with making this change is that we have to modify the core to support it, and the function above contains a lot of corrections; it would need intense testing to make sure nothing is screwed up in the process.
In there, by altering bits, you can remove or add certain properties on the desired spells instead of touching the hard-coded DBC files.
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the commentary about damage can be misleading, but that's because I made a mistake and I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason), like item.dbc.
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch the DBCs because we don't do custom modifications, so there was no interest in having them server-side.
But we wanted to change this and have started to make them available directly in the DB; if you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields, and we've also created some great conversion tools for DBC<->SQL, it's just a matter of "effort".
SQL conversion is useful for avoiding the use of client data on the server (you can completely overwrite it if you don't want to go against the EULA), or simply for extending/customizing it.
Here is the issue about DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584
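To give a feel for why it's mostly a matter of effort: the DBC container format itself is simple. Below is an illustration only (not one of the actual AzerothCore tools) that reads a WotLK-era WDBC file and dumps every record as raw uint32 columns, which is essentially the starting point of any DBC->SQL converter; mapping those columns to meaningful names and types is the part that needs a per-file format definition.

    # Illustration only: dump a WotLK-era .dbc (WDBC) file as raw uint32 rows.
    # A real DBC->SQL converter adds per-file column definitions on top of this.
    import struct
    import sys

    def dump_dbc(path):
        with open(path, 'rb') as f:
            magic, records, fields, record_size, string_block_size = struct.unpack(
                '<4s4I', f.read(20))
            assert magic == b'WDBC', 'not a WDBC file'
            for _ in range(records):
                raw = f.read(record_size)
                # Most WotLK DBC fields are 4-byte values; strings are stored as
                # offsets into the string block at the end of the file.
                row = struct.unpack('<%dI' % fields, raw[:fields * 4])
                print(','.join(str(value) for value in row))

    if __name__ == '__main__':
        dump_dbc(sys.argv[1])  # e.g. python dump_dbc.py Spell.dbc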
Recently, I've noticed strange behavior by Subversion. Occasionally, and seemingly randomly, the "svn up" command will wreak havoc on my CSS files. 99% of the time it works fine, but when it goes bad, it's pretty damn terrible.
Instead of noting a conflict as it should, Subversion appears to be trashing all incoming conflict lines and reporting a successful merge. This results in massively inconvenient manual merges because the incoming changes effectively disappear unless they're manually placed back into the file.
I would have believed this was a case of user error, but I just watched it happen. We have two designers that frequently work on the same CSS files, but both are familiar and proficient with conflict resolution.
As near as I can figure, this happens when both designers have a large number of changes to check in and one beats the other to the punch. Is it possible that this is somehow confusing SVN's merging algorithm?
Any experience or helpful anecdotes dealing with this type of behavior from SVN are welcome.
If you can find a diff/merge program that's better at detecting minimal changes in files of this structure, you can point Subversion at it with the --diff3-cmd option to svn update.
It may be tedious, but you can check the changes in the CSS file by using
svn diff -r 100:101 filename/url
for example, and stepping back from your HEAD revision. This should show what changes were made at each revision (and svn log will tell you by whom). It sounds like a merging issue I've had before, but unfortunately I found myself resolving it by looking at previous revisions and merging them manually too.
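If you end up doing a lot of that archaeology, it's easy to script the "stepping back" part. Here is a rough sketch (assuming the svn command-line client is on your PATH and the file is in a working copy) that walks a file's history and prints the diff introduced at each revision, newest first:

    # Sketch: show the diff introduced at each of the last few revisions of a file,
    # newest first. Assumes the svn CLI is available and the path is versioned.
    import subprocess
    import sys

    def revisions(path):
        """Revision numbers touching the file, newest first, via `svn log -q`."""
        out = subprocess.run(['svn', 'log', '-q', path],
                             capture_output=True, text=True, check=True).stdout
        return [int(line.split()[0][1:])  # "r1234 | author | date" -> 1234
                for line in out.splitlines() if line.startswith('r')]

    def walk_diffs(path, limit=5):
        revs = revisions(path)
        for newer, older in list(zip(revs, revs[1:]))[:limit]:
            print('=== changes introduced in r%d ===' % newer)
            result = subprocess.run(['svn', 'diff', '-r', '%d:%d' % (older, newer), path],
                                    capture_output=True, text=True, check=True)
            print(result.stdout)

    if __name__ == '__main__':
        walk_diffs(sys.argv[1])  # e.g. python walk_diffs.py styles.css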
I'm planning to create a site for learning technologies, along the lines of CodeProject or CodePlex. Can you please suggest different ways to manage a huge number of articles?
Look at a content management system, such as SiteFinity: http://www.sitefinity.com/. There are others, some free. You can find some on codeplex.com.
HTH.
Check out DotNetNuke CMS too >> http://www.dotnetnuke.com/
And here's a very good list of ASP.NET CMS systems:
http://en.wikipedia.org/wiki/List_of_content_management_systems#Microsoft_ASP.NET_2
Different ways to manage articles while building the entire system yourself. Hmm, ok, let me give it a try... here's the short version.
There are several ways you can "store" your articles (content, data, whatever), and the best way to do so is to use a database (SQL Server, MySQL, SQL CE, SQLite, Oracle; the list goes on).
If you're not sold on the idea of a database, you can use any other type of persistent storage that you like, e.g. XML or even flat "TXT" files.
Since you're using ASP.NET, you then need to write your code-behind, or some other compiled code, to access your stored data: you pull it out of storage and display it on the page/view.
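Language aside, the idea is the same everywhere: a table of articles, a query to fetch them, and a view that renders the result. Here's a minimal sketch using Python and SQLite purely because it's self-contained; in ASP.NET the equivalent would live in your code-behind with ADO.NET or an ORM:

    # Minimal sketch of the store-then-display idea, using SQLite to stay
    # self-contained; the table/column names are arbitrary.
    import sqlite3

    conn = sqlite3.connect('articles.db')
    conn.execute("""CREATE TABLE IF NOT EXISTS articles (
                        id INTEGER PRIMARY KEY,
                        title TEXT NOT NULL,
                        body TEXT NOT NULL)""")
    conn.execute("INSERT INTO articles (title, body) VALUES (?, ?)",
                 ("Hello", "First article body"))
    conn.commit()

    for article_id, title, body in conn.execute(
            "SELECT id, title, body FROM articles ORDER BY id"):
        # In a real site this is where the page/view would render the article.
        print(article_id, title, body[:40])

    conn.close()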
Last but not least, I'd like to give you a suggestion (even though it's not part of your original question). As the other answerers have stated, you should look at a pre-built CMS, if nothing else to see how it's done (not necessarily to use it as is). My philosophy is quite simple: if you want to be productive in your development, don't bother reinventing the wheel just for the sake of it. If someone else has already built and given away exactly what you need, you should at the very least give it a look and use what you can. It will save you piles of time and heartache.
Your question is not vague enough to be closed, but is vague enough that answering all of the nuances could take several thousand lines.
I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
The correct answer is ... "it depends".
It depends on which wiki you're using or planning to use. I've used various ones over the years: MoinMoin was OK; it uses files rather than a database, and Ubuntu seems to like it. MediaWiki everyone knows about, and JAMWiki is a Java clone(ish) of MediaWiki that aims to be markup-compatible with it. Both use databases, and you can generally connect whichever database you want; JAMWiki is pre-configured to use an internal HSQLDB instance.
I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages, and this was probably 90% handled by a tiny Perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance, being recipes for the missus) ;-)
I also recently set up a MediaWiki instance for work, and that took all of about 8 minutes to do. So that'd be my choice.
To answer your question: I don't believe there's such a standard as "WikiML", as Till called it.
As strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping would have been the easier, quicker, and more efficient way to move this particular file-based wiki to another one or to a CMS.
Given the context you wrote the question in, I would bite the bullet now and pay the little extra for a Windows-hosted account and put ScrewTurn Wiki on it. You've got the option of using a file-based or SQL Server-based back end for it, but because one of your requirements is low cost, I'm guessing you would use the file-based back end now on a cheaper hosted account; you can always upscale the back end to SQL Server later.
I haven't heard of WikiML.
I think your biggest obstacle is going to be converting one wiki markup to another. For example, some wikis use Markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode), etc. The bottom line is: assuming the contents are stored in a database, it's not impossible to export and parse them to make them "fit" into another system. It might just be a pain in the ass.
And if the contents are not stored in a database, it's going to be a royal pain in the ass. :D
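To make the "export and parse" point concrete, here's a toy sketch of the kind of markup translation you'd end up writing or borrowing. It assumes MediaWiki-style source and only covers a handful of constructs; a real migration needs many more rules plus handling for templates, tables, images, and so on.

    # Toy sketch: translate a few MediaWiki-style constructs to Markdown.
    # A real migration needs far more rules (templates, tables, images, ...).
    import re

    RULES = [
        # == Heading == ... ====== Heading ====== -> ## Heading ... ###### Heading
        (re.compile(r"^(={2,6})\s*(.+?)\s*\1\s*$", re.M),
         lambda m: "#" * len(m.group(1)) + " " + m.group(2)),
        (re.compile(r"'''(.+?)'''"), r"**\1**"),                    # bold
        (re.compile(r"''(.+?)''"), r"*\1*"),                        # italics
        (re.compile(r"\[\[([^|\]]+)\|([^\]]+)\]\]"), r"[\2](\1)"),  # [[Page|label]]
        (re.compile(r"\[\[([^\]]+)\]\]"), r"[\1](\1)"),             # [[Page]]
    ]

    def mediawiki_to_markdown(text):
        for pattern, replacement in RULES:
            text = pattern.sub(replacement, text)
        return text

    if __name__ == "__main__":
        sample = "== Recipes ==\nSee [[Chili|the chili page]] for '''spicy''' ideas."
        print(mediawiki_to_markdown(sample))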
Another solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on; it's not like a growing project suddenly requires IIS/ASP.NET. (It might just be the other way around.) But, for example, if you could stick with PHP for a while, you could also run that on IIS.