How to disable indexing in Plone?

The Data.fs is behaving strangely: its size grows by 5-6% every day without much content being added. I want to stop indexing.
I tried removing indexes from portal_catalog, but the site started throwing errors.
Can anybody suggest how to stop indexing so that my disk space does not fill up so fast?

collective.noindexing might help. I'd try it on a copy of production.
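Installation follows the usual Plone add-on pattern; here is a minimal sketch, assuming a standard plone.recipe.zope2instance part named [instance] (adjust the part name and eggs list to your own buildout):

[instance]
# ... existing instance options ...
eggs =
    Plone
    collective.noindexing

After re-running bin/buildout and restarting the instance, activate the add-on and check the package's documentation for how to switch the indexing patches on and off. As said, test it on a copy of production first.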

Related

Advice on optimising the post_meta table in my phpMyAdmin database

I am trying to optimize my site's database using phpMyAdmin.
I want to know whether it is safe to delete "post_meta", as it takes up more than 1 GB of my database.
Update: I removed all the post revisions, spam comments, etc., along with all the plugin data, using https://wordpress.org/plugins/plugins-garbage-collector/. However, the postmeta table still holds 1 GB, and when I look inside I can still see a lot of entries from old plugins. I followed the instructions at https://crunchify.com/better-optimize-wordpress-database.../ to remove some of them, but I am not sure whether this is the right (and shortest) way to fix the issue, or whether there is another way to clean the old plugin data from phpMyAdmin.
First, you should know that MySQL usually expands the size of its files on disk as needed, but doesn't shrink them. Deleting old data won't reclaim that disk space automatically. You can supposedly use the SQL command OPTIMIZE TABLE <table_name>; to do that manually, but you'll have to do it per table. I say "supposedly" because I've seen differing reports on whether this works to reclaim all the disk space from deleted rows, or if it only reclaims disk space from outdated indexes.
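For example, assuming the default wp_ table prefix (yours may differ), the command for the post meta table would be:

OPTIMIZE TABLE wp_postmeta;

You can run that from the SQL tab in phpMyAdmin. Note that on an InnoDB table MySQL actually recreates the table to do this, so expect it to take a while on 1 GB of data.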
Next, I would not delete an entire table. It's almost certain that WordPress will stop functioning properly if it can't access the post_meta table. Even removing individual entries risks corrupting your data in subtle ways: one post may reference another, and if the referenced row is missing, WordPress will show an error, and so on.
Finally, to specifically address your last question "[I]s [there] any other way to clean the old plugin data from phpmyadmin?" I highly suggest using WordPress-aware tools for this job. You can delete data from your database directly using phpMyAdmin (by either structuring a query properly or just going through and using the delete icon to remove a row you don't want), but not in a way that will be aware of how WordPress processes the data and relationships between tables. There are probably tools meant specifically for that job that would make things easier for you.

R knitr: is it possible to use cached results across different machines?

Issue solved, see answers for details.
I would like to run some code (with knitr) on a more powerful server and then perhaps make small changes on my own laptop. Even after copying across the entire folder, the cache seems to be rebuilt when re-compiling locally. Is there a way to avoid that and actually use the results in the cache?
Update: the problem arose from different versions of knitr on different machines.
In theory, yes -- if you do not change anything, the cache will be kept. In practice, you have to check carefully what the "small changes" are. The documentation page for cache explains when the cache will be rebuilt, and you need to check whether all three conditions are met.
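For reference, a minimal way to set this up (the cache directory shown is just knitr's default, made explicit) is a line like

knitr::opts_chunk$set(cache = TRUE, cache.path = "cache/")

in a setup chunk. Each chunk's cache is keyed on its code and options, so copying the cache/ directory along with the source document should be enough as long as neither of those, nor (as it turned out here) the knitr version, changes.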
In addition to @Yihui's answer, I wonder whether the process of copying from one machine to another changes the timestamps on the files so that they look out of date even when nothing has changed.
Look at the dates on the files involved after copying. If you can figure out which files need to be newer than others, then touching them may prevent the rebuilding.
Another option would be to just paste in the cached pieces directly so that they are not rerun (though that means you have to rerun and repaste manually if you change anything in those parts).

Subversion: "svn update" loses CSS data

Recently, I've noticed strange behavior by Subversion. Occasionally, and seemingly randomly, the "svn up" command will wreak havoc on my CSS files. 99% of the time it works fine, but when it goes bad, it's pretty damn terrible.
Instead of noting a conflict as it should, Subversion appears to be trashing all incoming conflict lines and reporting a successful merge. This results in massively inconvenient manual merges because the incoming changes effectively disappear unless they're manually placed back into the file.
I would have believed this was a case of user error, but I just watched it happen. We have two designers that frequently work on the same CSS files, but both are familiar and proficient with conflict resolution.
As near as I can figure, this happens when both designers have a large number of changes to check in and one beats the other to the punch. Is it possible that this somehow confuses SVN's merging algorithm?
Any experience or helpful anecdotes dealing with this type of behavior from SVN are welcome.
If you can find a diff/merge program that's better at detecting the minimal changes in files of this structure, use the --diff3-cmd option to svn update to invoke it.
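For example (the wrapper path here is hypothetical):

svn update --diff3-cmd /usr/local/bin/my-merge-wrapper

The command you supply has to accept the same arguments and produce the same merged output as GNU diff3, since Subversion calls it in place of its built-in three-way merge.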
It may be tedious, but you can check the changes in the CSS file by using
svn diff -r 100:101 filename/url
for example, stepping back from your HEAD revision. This should show what changes were made between those revisions. It sounds like a merging issue I've had before, but unfortunately I found myself resolving it by looking at previous revisions and merging them manually too.
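svn diff itself won't tell you at which revision or by whom a line was changed; for that (the file name below is just a placeholder), use the history commands:

svn log -v path/to/styles.css
svn blame path/to/styles.css

svn log lists every revision that touched the file with its author and date, and svn blame annotates each line with the revision and author that last changed it, which helps narrow down where the lost lines went.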

Cleaning up the attic in Plone

I have an old site with a Data.fs which has travelled through aeons and accumulated enough cruft to be comparable with the yard of an average used-car dealer.
Even after manually removing folders and packing the database, the Data.fs still seems to take up too much space.
What would be a good process to hunt down and reclaim this "lost space" in Data.fs? For example, printing out the object tree and the relative sizes of the folders (recursively).
See ZODB/scripts/netspace.py or enfold.recipe.zodbscripts. There are ways to get netspace installed into your buildout with all the right path info set up. Model your part after the zodbscripts recipe's example, but use netspace=ZODB.scripts.netspace:Main instead of migrateblobs=ZODB.scripts.migrateblobs:main.
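A minimal sketch of such a part, assuming zc.recipe.egg and a part name of zodbscripts (both the part name and the eggs line are placeholders to adapt to your setup):

[zodbscripts]
recipe = zc.recipe.egg
eggs = ZODB3
entry-points = netspace=ZODB.scripts.netspace:Main
scripts = netspace

After re-running bin/buildout you should get a bin/netspace script that you can point at your Data.fs to report which objects are taking up the space.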
This doesn't help track it down, but you could try:
Mirroring the configuration, but with a clean Data.fs;
Exporting (.zexp) from the live site;
Importing into the clean instance;
If it all goes well, switch to the new DB.
You can also take a look at these links:
Update the database
Inspect the database with eye
Another tip to look at the database size
The eye one looks promising.
And don't forget that no one expects the Plone Inquisition:
http://pypi.python.org/pypi/mr.inquisition

CodeRush Xpress solution cache getting a bit big (4.5 GB)

...any ideas how to stop it from growing?
Our IT services department has placed a cap on profile size and now we're getting annoying audit messages.
Normally I'd blame IT services for their 'one size fits all, treating developers like they were drones from Sector 7G' attitude, but 4.5 GB is a bit on the big side.
Given how clever those chaps at DevExpress are, I can't believe they've implemented this caching without a setting to keep it from growing so large.
Have we missed something?
In all honesty I think you'd be better off asking DevExpress directly.
Try the Support Center (http://www.devexpress.com/sc/)
Or email them directly at support@devexpress.com
That said... even if the cache isn't self-regulating, you should be able to blow away the SolutionCache and AssemblyCache folders without any major issues. The caches in question will be rebuilt as needed.
