Articles in Plone go through an author-editor-publisher process. Why are feeds the exception? Is there a way to apply the same process to feeds/news via the add-a-news-item workflow?
Feeds follow the same pattern; they will pick up on whatever criteria you have specified in e.g. a Collection. If you only show things that are "Published", that will also be the case in the associated feed.
If you're asking for a separate workflow/approval process for the feed itself, that's not how it works. (And I have trouble seeing what problem that would solve, beyond adding more complexity and more process. :)
I need to implement a central search across multiple Plone sites on different servers/machines. Being able to select which sites to search would be a plus, but it is not the primary concern. A few approaches I came across:
-Export the ZCatalog indexes to an XML file and have a crawler periodically fetch all the XML files so a search can be done on them, but this does not allow for live searching.
-There is a way to use a common catalog, but it's not optimal and cannot be implemented on the sites I am working on because of some requirements.
-I read somewhere that people have used Solr, but I need help on how to use it.
However, I would prefer a way to use the existing ZCatalog index rather than create another index, which I think is required with Solr, because of the extra overhead and the extra index that has to be maintained. I will use it if no other solution is possible. I am a beginner at search, so please give as much detail as possible.
You should really look into collective.solr:
https://pypi.python.org/pypi/collective.solr/4.1.0
Searching multiple sites is a complex use case, and you most likely need a solution that scales. In the end it will take far less effort to go with Solr than to come up with your own solution; Solr is built for exactly this kind of requirement.
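To make the "search several sites at once" idea concrete, here is a minimal sketch of federating a query across per-site Solr cores over Solr's standard `/select` endpoint. The host names and core names are hypothetical placeholders; in a real collective.solr setup each site would be configured to index into its own core.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoints -- one Solr core per Plone site.
SITE_CORES = {
    "site-a": "http://solr.example.org:8983/solr/site_a",
    "site-b": "http://solr.example.org:8983/solr/site_b",
}

def build_select_url(core_url, query, rows=10):
    """Build a standard Solr /select URL that returns JSON results."""
    params = urllib.parse.urlencode({"q": query, "rows": rows, "wt": "json"})
    return "%s/select?%s" % (core_url, params)

def federated_search(query, sites=None):
    """Query each selected site's core and merge the result documents."""
    merged = []
    for name, core_url in SITE_CORES.items():
        if sites is not None and name not in sites:
            continue  # lets callers restrict which sites to search
        with urllib.request.urlopen(build_select_url(core_url, query)) as resp:
            data = json.load(resp)
        merged.extend(data["response"]["docs"])
    return merged
```

Merging here is a naive concatenation; proper relevance ranking across cores is one of the problems a single shared Solr (or Solr's own distributed search) solves for you.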
As an alternative, you can also use collective.elasticindex, an extension that indexes Plone content into ElasticSearch.
According to its documentation:
This doesn’t replace the Plone catalog with ElasticSearch, nor
interact with the Plone catalog at all; it merely indexes content in
ElasticSearch when it is modified or published.
In addition to this, it provides a simple search page called
search.html that queries ElasticSearch using JavaScript (so Plone is
not involved in searching) and offers the same features as the
default Plone search page. A search portlet lets you redirect people to
this new search page as well.
That can be an advantage over collective.solr.
Is there a way to mass delete publications rather than delete them from the Content Manager? I need to get rid of about 75 pubs which are now surplus.
Whilst it may be possible to manipulate the database directly, the only supported ways to delete publications are through the Content Manager or an API (although, from a quick look at the documentation, I think it's only possible through the older TOM API, not TOM.NET).
As Nuno suggests, for 75 publications, it will likely be far easier to do it through the Content Manager rather than write/test/debug a tool that uses the API to do the same job.
Remember that you can only delete publications as long as:
No content in the Publication is published.
The Publication does not have any Child Publications in a BluePrint.
You are a system administrator.
The simplest way, I would say, is the Core Service API: just call client.Delete("tcm:0-xyz-1");, creating your Core Service client as described on tridion-practice, for example.
However, you will most likely get an "Item is in use." error back, which is probably best resolved manually in the UI. Unpublishing the entire publication before calling Delete is also possible using the client.UnPublish() method (see the API documentation for details about the required parameters).
A lot will depend on which publications you need to get rid of. It's easy enough to delete publications from a script (my favourite approach is Windows PowerShell), but you'll need to delete the blueprint children first, before attempting to delete their parents. If a publication has a blueprinting child, you can't delete it.
So first figure out the blueprinting relationships, and then do the deleting. Still, for 75 publications, you would probably be finished doing it by hand before you had your script tested. Of course, if you need to transmit the same changes accurately through your DTAP street, a script is the way to go.
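The "figure out the blueprinting relationships, then delete" step amounts to a children-first (topological) ordering. A small sketch, independent of any particular Tridion API: given the set of publications to delete and a map of each publication's BluePrint parents (which you would fetch via the Core Service; the ids below are made up), it emits a safe deletion order.

```python
def deletion_order(to_delete, parents_of):
    """Return the publications in a children-first deletion order.

    to_delete  -- set of publication ids slated for deletion
    parents_of -- dict mapping each id to its BluePrint parent ids
    """
    ordered, seen = [], set()

    def visit(pub):
        if pub in seen:
            return
        seen.add(pub)
        # A publication may only be deleted after every child of it that
        # is also slated for deletion is gone, so emit children first.
        for child in (p for p in to_delete if pub in parents_of.get(p, ())):
            visit(child)
        ordered.append(pub)

    for pub in to_delete:
        visit(pub)
    return ordered

# Hypothetical chain: 0-3 is a child of 0-2, which is a child of 0-1.
parents = {"tcm:0-2-1": ["tcm:0-1-1"], "tcm:0-3-1": ["tcm:0-2-1"]}
order = deletion_order({"tcm:0-1-1", "tcm:0-2-1", "tcm:0-3-1"}, parents)
```

You would then loop over `order`, unpublishing and deleting each publication in turn; any remaining "Item is in use" errors still need manual attention, as noted above.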
This is a somewhat time-consuming process; I deleted more than 50 publications last year, with thousands of published items in them.
FYI, there is a quicker way to mark all items as unpublished using the Power Tools, but that will still leave many entries in the Broker database.
So it is advisable to plan this and do proper unpublishing, then delete via the Content Manager, the TOM API, or the Core Service.
So I have played with the idea of making a specialized RSS reader for some time now, but I have never gotten around to it. I have several projects that could benefit from reading feeds in one way or another.
One project for this is an RSS bot for an IRC channel I'm on. But I haven't quite wrapped my mind around how I can "mark as read" a story, so that it doesn't spit out all the stories in the feed every time it runs.
Now, I haven't read the specs extensively yet either, so there might be some kind of unique ID I could use to mark the entry as read in a database of some kind. But is this the right way to do it?
From reading the spec for RSS 2.0 at http://cyber.law.harvard.edu/rss/rss.html#hrelementsOfLtitemgt, it seems each item has a GUID which you can use to track which articles have been read.
We have a content-type built using CCK. One of the fields is a node reference. The node picker is using a view to build the options.
A few days ago, everything was working well.
Today, it looks like all node reference fields that use views to populate the selection options are displaying the wrong label. Every single label in the options list is "A", but the actual node number is correct. The form actually works; only the labels are incorrect.
We have tried just about every combination of edit/save, disable/enable, reboot, clear cache, clone the view, rebuild the view, new view, etc, but we still have a big list of As.
If we create a brand new content type with a brand new node reference field, we get the problem.
Through some backup/restore exercises, we have determined that the problem is actually in the database and not in the code.
We can restore our last good backup, but we will lose a decent amount of work we have put into other parts of the database.
We enabled mysql query logging, and the view is actually being called properly, but we cannot track down where the problem is creeping in after that (unraveling the CCK / Views / Drupal plumbing is a challenge).
The install was built with the latest stable versions as of April.
The problem referred to in http://drupal.org/node/624422 is similar, but our code versions include the patches mentioned there.
Any ideas would be appreciated. Thanks.
I had a similar problem using views for node reference; after quite a lot of hair pulling, it turned out that my caching layer was buggy. I was using memcached, but memcached wasn't turned on on the server. It may be worth checking.
Thanks for the responses. We finally got to the bottom of this.
There was a module implementing a custom hook_views_post_render() that did a preg_replace to rewrite some output. Unknown to us, there are instances where the $output parameter isn't a string but an array, and this was causing the problem. One of those instances happens to be when you attach a view to build a select in CCK.
I can see where to get an RSS feed for the bug list; however, I would like to get RSS updates for modifications to existing bugs, if possible.
This is quite high up when searching via Google for it, so I'm adding a bit of advertisement here:
As Bugzilla still doesn't support this, I wrote a small web service that does exactly that. You can find its source code here and a running instance here.
What you're asking for is the subject of this enhancement bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=256718
but no one seems to be working on it.
My first guess is that the way to do it is to add a template somewhere like template/en/default/bug/show.atom.tmpl with whatever you need. Put it in custom or an extension as needed.
If you're interested in working on it or helping someone with it, visit channel #mozwebtools on irc.mozilla.org.
Not a perfect solution, but with the resolution of bug #255606, Bugzilla now allows listing all bugs by running a search with no criteria; you can then get the results of the search in Atom format using the link at the bottom of the list.
From the release notes for 4.2:
Configuration: A new parameter, search_allow_no_criteria, has been added (default: on) which allows admins to forbid queries with no criteria. This is particularly useful for large installations with several tens of thousands of bugs, where returning all bugs doesn't make sense and would have a performance impact on the database.
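If you want to fetch that Atom output directly rather than click the link, a short sketch of building the buglist.cgi URL with ctype=atom (the search criteria below are illustrative; with no criteria at all you get every bug, subject to search_allow_no_criteria):

```python
import urllib.parse

def buglist_atom_url(base, **criteria):
    """Build a buglist.cgi URL that returns Atom instead of HTML."""
    params = dict(criteria)
    params["ctype"] = "atom"  # ask Bugzilla for the Atom rendering
    return "%s/buglist.cgi?%s" % (base.rstrip("/"),
                                  urllib.parse.urlencode(params))

url = buglist_atom_url("https://bugzilla.mozilla.org",
                       product="Bugzilla", bug_status="NEW")
```

The resulting feed can then be polled and de-duplicated by entry id, the same way as any other Atom/RSS feed.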