This is a repeat of a question in the (restricted) Tridion Forum about the inability to delete a Structure Group. However, since it never got a proper answer or solution for the person reporting it, I am re-asking it here.
I am stuck with a Structure Group which I can't delete either. It is not localized, is blueprinted to only one other Publication, and does not have any pages in it. The contents were migrated from a presentation environment; perhaps an old publishing target is stuck somewhere?
Deleting it directly in the database is not an option. Any other solutions?
Is it possible you have multimedia components rendered using that Structure Group? This may cause some kind of lock. You might try using the Set Publish States PowerTool for 2009 to set everything in that Publication to UnPublished and see if that helps.
Brute force: start a DB trace, try to delete the Structure Group via the GUI, and look at which items it finds when checking for dependencies.
Or
Open a support ticket, send them the DB, let them take a look at it.
We came across similar issues at a customer. Our initial analysis was to examine the stored procedures that do the delete, and to see what constraints were enforced. On examining the data, we could see records that would not show up in the user interface, but which would prevent the deletion.
We raised a ticket with SDL Tridion customer support, and were able to agree with them which records should be modified in the database.
So that's the take-away from this: you aren't allowed to modify the database yourself, but SDL Tridion customer support can sanction it, once they have checked that the changes are correct and necessary. Obviously, if you were to attempt such changes without the co-operation of support, you'd end up with an unsupported system.
Using Plone as a Document Management System for a Quality Management Program (capitalized buzzwords for added effect...), we are looking for "Reading list" functionality.
This should provide two functions:
Show the end-users which new documents have been updated that they haven't read yet
Show the Quality manager who hasn't read certain updated documents
An additional thought: preferably, the status of the document in the workflow should stay the same (we considered adding a "pending reading" state but decided against it).
Together with some of my previous questions answered here on Stack Overflow, we would then be just about ready to roll out what seems to be an ISO 9001-compliant document management system, in Plone, open source, practically all through the web - I can't say I expected this three weeks ago...
Does anybody know of such a product?
I think this can only be done with a custom product.
The way I would do this is by registering a viewlet (or a portlet) associated with the content-type of the document, providing a button.
The button, when clicked, will write the user-id and the content-id somewhere, perhaps in an annotation or in an external database (Redis, MySQL). Additional views, viewlets, and portlets will be needed to provide information to End-users and Managers about what has been read or not.
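A minimal sketch of the annotation approach, assuming the view names, annotation key, and registration details are purely illustrative (this is not an existing add-on):

```python
# Hypothetical "mark as read" views storing reader ids in an annotation
# on the document. READERS_KEY and the class names are placeholders.
from persistent.list import PersistentList
from Products.CMFCore.utils import getToolByName
from Products.Five.browser import BrowserView
from zope.annotation.interfaces import IAnnotations

READERS_KEY = 'mycompany.qms.readers'  # assumed annotation key


class MarkAsRead(BrowserView):
    """Record that the current user has read this document revision."""

    def __call__(self):
        annotations = IAnnotations(self.context)
        readers = annotations.get(READERS_KEY)
        if readers is None:
            readers = annotations[READERS_KEY] = PersistentList()
        mtool = getToolByName(self.context, 'portal_membership')
        user_id = mtool.getAuthenticatedMember().getId()
        if user_id and user_id not in readers:
            readers.append(user_id)
        return 'ok'


class ReadersReport(BrowserView):
    """Give the Quality manager the list of users who have read the document."""

    def readers(self):
        return list(IAnnotations(self.context).get(READERS_KEY, []))
```

Registered for the document content type, the first view can sit behind the viewlet's button; the second could feed a portlet or an overview page for the Quality manager. Clearing the annotation whenever a new revision is published keeps the workflow state itself untouched.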
This is related to a previous question I asked regarding splitting an ASP.NET MVC web application into two apps - one public and one admin. Both of them would share the same database, of course.
With two web apps I'll have two host applications and so two separate NHibernate session factories. My concern is that their separate data caches will cause all sorts of problems.
While posting this, I looked at this question, which covers a similar problem (for a different reason). I really don't want to have to set up some kind of distributed cache just for a low use admin application.
How can this be resolved? Is separating the admin section into its own application not possible with NHibernate without drastic measures?
Thanks.
We run this setup successfully, although some discrepancy in the data is always a risk. However, since the 2nd-level cache is configurable per site, you can disable it entirely, or turn it down for specific cache regions, on your admin application.
The 2nd level cache will only be used for reading, since explicit updates will be flushed down and persisted directly.
If your concern is that content on the site will be stale once it is modified, some sort of trigger will be needed to instruct the site to evict its cache. If I remember correctly, NHibernate will then evict the entire 2nd-level cache for that entity type.
I think your problems with concurrency will be minimal if your site and your admin update different entities. For example, in a webshop:
Site will create orders, modify customers etc but only read products, prices and categories
Admin will modify orders, products, prices and categories but only read customers
You can, however, instruct NHibernate to update only the modified fields/properties of an object by setting dynamic-update="true" on the mapping of the entities where concurrency worries you. This won't fully solve your problem, but it minimizes concurrency issues.
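For reference, that attribute sits on the class element of the hbm.xml mapping; the entity, table, and assembly names below are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="Shop" namespace="Shop.Domain">
  <!-- dynamic-update="true": only changed columns are included in the UPDATE -->
  <class name="Order" table="Orders" dynamic-update="true">
    <id name="Id" column="OrderId">
      <generator class="native" />
    </id>
    <property name="Status" />
    <property name="Total" />
  </class>
</hibernate-mapping>
```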
First, you should know that NHibernate doesn't enable second-level cache by default.
So you don't actually need any additional steps just to avoid a distributed cache. Use a separate "Admin" ISessionFactory and simply don't enable any L2 cache for it.
This could be a problem inside a single app/factory, but you have already solved that by dividing them into two different physical apps.
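A minimal sketch of that setup, assuming the admin application bootstraps its own factory from its own configuration file (the file name is made up; the property names come from NHibernate.Cfg.Environment):

```csharp
using NHibernate;
using NHibernate.Cfg;

// Hypothetical bootstrap code: each application builds its own factory.
// The public site enables the second-level cache in its own config;
// the admin app simply leaves it off (which is also NHibernate's default).
public static class AdminSessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        var cfg = new Configuration();
        cfg.Configure("hibernate.admin.cfg.xml");   // assumed admin-specific config file

        // Make the intent explicit even though "off" is the default:
        cfg.SetProperty(Environment.UseSecondLevelCache, "false");
        cfg.SetProperty(Environment.UseQueryCache, "false");

        return cfg.BuildSessionFactory();
    }
}
```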
Does anybody know of an easy way to serialise Umbraco settings (Document Types, Media Types etc) to the file system in order to manage that data within source control?
Note: changes to settings made on the file system need to be easily integrated back into the CMS database.
Also, does anybody know of a way to package up settings from a development environment for rolling out to staging and live environments?
Looking back through my unanswered questions, providing updates where possible.
For reference, you can use uSync to serialise content from Umbraco:
https://our.umbraco.org/projects/developer-tools/usync
There is currently no method other than rolling your own package to do it, but it should be relatively straightforward using the API. Check out the "Backing up Document Types" article as a starting point.
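As a very rough sketch of what "rolling your own" might look like, assuming the legacy DocumentType API calls (GetAllAsList, ToXml) used in that article are available in your Umbraco version; the output folder handling is purely illustrative:

```csharp
using System.IO;
using System.Xml;
using umbraco.cms.businesslogic.web;

// Hypothetical exporter: writes each document type as XML to a folder that
// can be committed to source control. API availability depends on version.
public static class DocumentTypeExporter
{
    public static void ExportAll(string folder)
    {
        Directory.CreateDirectory(folder);

        foreach (DocumentType dt in DocumentType.GetAllAsList())
        {
            var xml = new XmlDocument();
            xml.AppendChild(dt.ToXml(xml));
            xml.Save(Path.Combine(folder, dt.Alias + ".config"));
        }
    }
}
```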
Your second point, about deployment, was discussed in an open session at the Umbraco Codegarden this year, and no one-size-fits-all conclusion was reached.
There are certain tables that are queried often but updated rarely. One of these tables is Departments. To save DB trips, I think it is OK to cache this table, considering that it is very small. However, once it is cached, the issue of keeping its data fresh arises. So what is the best way to determine that the table is dirty and therefore requires a reload, and how should that code be invoked?

I'm looking for a solution that will scale. Updating the cache on a single machine right after inserting will not resolve the issue: if one machine inserts a record, all the others on the farm should be notified to reload the cache. I was thinking of calling a corresponding web service from T-SQL, but I don't really like the idea of consuming resources on the SQL server. So what are the best practices for resolving this type of problem?
Thanks in advance
Eddy
There are some great distributed caching frameworks out there. Have a look at NCache and Velocity. NCache has some great features for keeping the cached data in sync between different cache nodes as well as the underlying database. But it comes at a price.
Have you tried using SQL dependencies or cache dependencies? The library will poll the database every so often to see if the data has changed. An alternative is to use cache dependencies: you can have a master cache object and have child caches depend on it, so that if the master cache entry changes, the child caches are invalidated.
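To illustrate the master/child idea with the built-in ASP.NET cache (the key names and the data-loading helper are made up):

```csharp
using System;
using System.Web;
using System.Web.Caching;

// Illustrative sketch of a "master key" that dependent entries hang off.
public static class DepartmentCache
{
    private const string MasterKey = "departments-master";
    private const string DataKey = "departments";

    public static object GetDepartments()
    {
        var cached = HttpRuntime.Cache[DataKey];
        if (cached != null)
            return cached;

        // Ensure the master entry exists, then cache the data as its child.
        if (HttpRuntime.Cache[MasterKey] == null)
            HttpRuntime.Cache.Insert(MasterKey, DateTime.UtcNow);

        var data = LoadDepartmentsFromDb();   // assumed data-access helper
        var dependsOnMaster = new CacheDependency(null, new[] { MasterKey });
        HttpRuntime.Cache.Insert(DataKey, data, dependsOnMaster);
        return data;
    }

    // After an insert/update, touching the master key evicts every child
    // entry on this machine; the other servers in the farm still need a
    // signal (e.g. SqlCacheDependency or a distributed cache) to do the same.
    public static void Invalidate()
    {
        HttpRuntime.Cache.Insert(MasterKey, DateTime.UtcNow);
    }

    private static object LoadDepartmentsFromDb()
    {
        return new[] { "Sales", "Support" };  // stand-in for the real query
    }
}
```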
Edit:
If the above is not a solution, you can use memcached (memcached.net -- see the Wikipedia article). It is geared toward large sites, but it is a solution for your problem.
Here is an article that describes the thinking around setting up a cache.
http://www.javaworld.com/javaworld/jw-07-2001/jw-0720-cache.html?page=1
Generally speaking, objects in a cache have lifetimes, and when the lifetime expires they are re-fetched from the database. If the data is not critical, this eventual consistency allows a reasonable trade-off between performance and the accuracy of the presented information.
Other cache tools add techniques to keep the data more accurate, e.g. repopulating a particular entry as soon as the corresponding update command has been executed.
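A minimal get-or-load sketch of the lifetime approach using the ASP.NET cache (the key, the loader delegate, and the five-minute window in the usage comment are arbitrary choices):

```csharp
using System;
using System.Web;
using System.Web.Caching;

// Entries expire after a fixed lifetime and are re-fetched from the
// database on the next request. Illustrative helper, not a recommendation.
public static class SimpleCache
{
    public static T GetOrLoad<T>(string key, Func<T> load, TimeSpan lifetime)
        where T : class
    {
        var cached = HttpRuntime.Cache[key] as T;
        if (cached != null)
            return cached;

        var fresh = load();
        HttpRuntime.Cache.Insert(
            key, fresh, null,
            DateTime.Now.Add(lifetime),        // absolute expiration
            Cache.NoSlidingExpiration);
        return fresh;
    }
}

// Usage (names are placeholders):
// var departments = SimpleCache.GetOrLoad("departments",
//     LoadDepartmentsFromDb, TimeSpan.FromMinutes(5));
```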
Is it possible to query the Crystal CMS database and get meaningful data back? The data appears to be encrypted.
I am running Business Objects Crystal Report Server version 11.5
Actually, what I discovered I needed to do was use the administration tools available from the Administration Launchpad. I was not responsible for installing Crystal and did not even realise this existed. The Query Builder and the "Report Datasources" feature available from there were exactly what I needed.
Use the Query Builder tool to query the CMS: http://[server]/businessobjects/enterprise115/WebTools/websamples/query/. For more information about the query language, see http://devlibrary.businessobjects.com/businessobjectsxi/en/en/BOE_SDK/boesdk_dotNet_doc/doc/boesdk_net_doc/html/QueryLanguageReference.html#2146566.
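For example, a query along these lines, run in the Query Builder, lists the Crystal reports stored in the CMS (the exact properties available depend on your version):

```
SELECT SI_ID, SI_NAME, SI_PARENTID
FROM CI_INFOOBJECTS
WHERE SI_KIND = 'CrystalReport'
```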
The properties that are returned by this query are stored in a serialized state (I'm guessing binary and encrypted) in the Properties field of the infoobject database table (I can't remember the actual name of the table).
I had a similar problem on my workstation at the office. It sounds like you need to reinstall (that's what worked for me). This is a known bug according to BusinessObjects (I had to call them and use our maintenance support). If the reinstall doesn't work for you, you can hopefully find more information by searching for "Crystal Business query corruption" instead of calling them.
They told me the data is not encrypted, but occasionally components don't install correctly and the queries come back in a binary form that is all garbled.
Good luck!
There are also several third-party solutions out there that layer "on top of" the CMS (Central Management Server) to abstract the proprietary storage format into human-readable form. We develop a native database driver for the CMS, which can be found at http://www.infolytik.com/products.
full disclosure: I'm the main developer and founder of Infolytik.
My experience is that the data is not encrypted, but it is not really readable either. Your best option is to use the Auditor Universes to build some reports. You can also check out the SQL that the Auditor Universes use, as a baseline for constructing additional reporting.