We're trying to upgrade an old Tridion 2009 site to Tridion 2011, and we're running into problems when publishing Pages.
Here is a snippet from the cd_storage_conf.xml:
<Publication Id="78" defaultStorageId="defaultdb" cached="true">
    <Item typeMapping="Binary" cached="true" storageId="defaultJSPFile"/>
    <Item typeMapping="Page" cached="true" storageId="defaultJSPFile"/>
    <Item typeMapping="Metadata" cached="true" storageId="defaultdb"/>
</Publication>
defaultJSPFile references a file-system storage and defaultdb references an MSSQL database storage.
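For reference, those two storages would be defined in the Storages section of cd_storage_conf.xml along these lines (a sketch only; the server name, credentials and root path are illustrative placeholders, not our actual values):

```xml
<Storages>
    <!-- MSSQL broker database (connection details are placeholders) -->
    <Storage Type="persistence" Id="defaultdb" dialect="MSSQL"
             Class="com.tridion.storage.persistence.JPADAOFactory">
        <Pool Type="jdbc" Size="5" MonitorInterval="60"
              IdleTimeout="120" CheckoutTimeout="120" />
        <DataSource Class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
            <Property Name="serverName" Value="DBSERVER" />
            <Property Name="portNumber" Value="1433" />
            <Property Name="databaseName" Value="Tridion_Broker" />
            <Property Name="user" Value="TridionBrokerUser" />
            <Property Name="password" Value="secret" />
        </DataSource>
    </Storage>
    <!-- File-system storage for Pages and Binaries -->
    <Storage Type="filesystem" Id="defaultJSPFile" defaultFilesystem="false"
             Class="com.tridion.storage.filesystem.FSDAOFactory">
        <Root Path="C:\inetpub\wwwroot\site" />
    </Storage>
</Storages>
```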
Here is the error message from the publishing queue:
66560, Unable to store item inside current transaction,
Could not parse tcd:pub[78]/componentmeta[119939],
Could not parse tcd:pub[78]/componentmeta[119939],
Could not parse tcd:pub[78]/componentmeta[119939],
Could not parse tcd:pub[78]/componentmeta[119939],
Could not parse tcd:pub[78]/componentmeta[119939],
Could not parse tcd:pub[78]/componentmeta[119939],
Unable to store item inside current transaction,
Could not parse tcd:pub[78]/componentmeta[119939] ,
Could not parse
Changing the storageId of the Metadata typeMapping to defaultJSPFile makes the error go away, but we need the metadata stored in the database, so that is not an acceptable workaround.
Problem solved. It turned out the Tridion.ContentDelivery.Interop.dll wasn't upgraded in the deployer. I'm still not sure what changed such that the deployer now goes through the linkinfo folder in the deployment zips.
To summarize, make sure you upgrade EVERYTHING when installing a service pack or a hotfix rollup.
I'm integrating Google Tag Manager in my project and got everything up.
However I'm not sure whether JSON (GTM-XXXX JSON file) should be committed or not.
I checked the code and noticed some IDs, and I'm not sure if this file is similar to google-services.json and must not be added/committed to git.
What is the correct approach?
I have set up my .sqlproj project structure based on object type, using the Import Database wizard with Object Type as the folder structure, i.e. the same view you'd get in SSMS or SQL Server Object Explorer.
Yet when I use SSDT Schema Compare to update the project, objects are always imported into a Schema\Object Type structure, turning the project into a mess of mixed structures.
I can't find anywhere to change the behaviour of the Schema Compare update so that it keeps using the Object Type structure. Is that possible?
Using SSDT 14.0.51215 (Dec 2015).
I would suggest submitting this feedback as a suggested improvement to Microsoft at https://connect.microsoft.com/SQLServer/feedback/CreateFeedback.aspx using the category "Developer Tools (SSDT, BIDS, etc.)"
You'll have to update the .sqlproj file directly:
<PropertyGroup>
    ...
    <DefaultFileStructure>BySchemaType</DefaultFileStructure>
    ...
</PropertyGroup>
More info in this forum thread: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/e6bfd561-3e7d-4052-be9c-631037681c3e/default-file-structure?forum=ssdt
I'm trying to read the binary data of a published binary using the following code:
var factory = new Tridion.ContentDelivery.DynamicContent.BinaryFactory();
BinaryData binaryData = factory.GetBinary(uri.ToString());
This worked fine, until I deployed it in an environment where the binaries are stored on the file system rather than the broker database. Now, the BinaryData is always null, even though I'm sure that the file exists.
Is it mandatory to store your binaries in a database if you want to use the BinaryFactory like this? Or am I missing something?
I just ran some tests on my SDL Tridion 2011 SP1 HR1 environment, and I can confirm that BinaryData is populated (i.e. not null and contains values) when my binaries are on the file system. I used your code sample and just added a valid URI of a binary that is used on a page on my website. I am not sure what is different between our environments; my only thought would be to check that BinaryMeta is deployed to your Broker Database (although if this makes a difference, I would think it is a bug).
The ItemTypes node of my cd_storage_conf.xml is as follows:
<ItemTypes defaultStorageId="defaultdb" cached="true">
    <Item typeMapping="Binary" storageId="defaultFile" cached="true"/>
</ItemTypes>
So everything except the binaries is in the DB.
I am not sure which version of SDL Tridion you are using (and I have no idea if it would impact this), but I recently heard that storing any metadata on the file system is no longer supported as of 2011 SP1.
We have a requirement that on a page publish, we need to:
Find a component presentation whose component is based on a particular schema.
Extract certain field values from that component and store them in a custom database table that's available to our .NET application (on the Content Delivery side).
I think this is a good candidate for either a Deployer extension or a Storage extension, but, never having written either, I'm a little unclear which to choose and why.
I've ruled out the Event System as this kind of code would be located on the CM, which seems like the wrong "side" to me - my focus is on extending what happens on the CD-side after a page is published.
I've read a few articles on Tridion World (this, this, this and this), and I think a storage extension would be the better choice?
Mihai's article seems to be very close to what we need, where he uses a new item type mapping:
<ItemTypes defaultStorageId="brokerdb" cached="true">
    <Item typeMapping="PublishAction" cached="false" storageId="searchdb" />
</ItemTypes>
But how does Tridion "know" to use this new item type when content is published? (It's not one of the defined TYPE_NAMEs, which is kind of the point.)
I should clarify I'm a .NET/C# dev not a Java dev so this is probably really obvious to Java people - apologies if it is!
Cheers
Tridion will not know by default how to deploy your new entity. My advice is to create a Deployer Module (your links should give you enough information on how to do that) that executes in the post-processing phase of the deployment process, processes all components from the deployment/transport package, extracts the needed information, and uses a custom Storage Extension to store it.
Be careful: you need to set up your new type in the configuration, but you also need to use it yourself from that Deployer Module.
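To sketch what that wiring might look like (all of the names here, the PublishAction type, the searchdb storage Id and the module class, are hypothetical and would come from your own extension):

```xml
<!-- cd_storage_conf.xml: bind the custom item type to its own storage -->
<ItemTypes defaultStorageId="brokerdb" cached="true">
    <Item typeMapping="PublishAction" cached="false" storageId="searchdb" />
</ItemTypes>

<!-- cd_deployer_conf.xml: register the custom module so it runs during deployment -->
<Processor Action="Deploy" Class="com.tridion.deployer.Processor">
    <!-- ...the standard modules (PageDeploy, BinaryDeploy, etc.) stay here... -->
    <Module Type="PublishActionDeploy"
            Class="com.example.deployer.PublishActionModule" />
</Processor>
```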
Hope this helps.
I am running Tridion 2011 SP1 and am getting the following warnings in my cd_core.xxxx.xx.xx.log file.
2012-10-17 12:37:50,298 WARN FSTaxonomyDAO - TaxonomyDAO is set to File System, which is not supported. Check your bindings settings and/or license file.
I have removed the following element from the cd_deployer_conf.xml
<Module Type="TaxonomyDeploy" Class="com.tridion.deployer.modules.TaxonomyDeploy"/>
but I am still getting warnings.
I think this problem is causing many of my multimedia components to fail when publishing. If fixing the cause of these warnings doesn't help, then I'll have at least narrowed it down.
edit
I forgot to mention that I am using the File System as the Content Data Store
The storage bindings are in cd_storage_conf.xml. Check your storage configuration; taxonomies should be bound to the database. Also, you should not remove the TaxonomyDeploy module from the deployer configuration if you are using taxonomies.
Update:
As far as I know, binding taxonomy storage to the file system is not supported; it can only go to the database, and this has been the case since the 2009 release. What you are seeing is a WARNING that you are using an unsupported binding. I am not sure if you can disable it.
Also, metadata stored on the local file system is deprecated as of SDL Tridion 2011 SP1, in favor of storing metadata in the database (see the SDL documentation reference link).
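As a sketch, assuming the file system stays your default Content Data Store, you could point the metadata binding at a database storage in cd_storage_conf.xml; since taxonomy data is handled via the metadata DAOs, this should make the FSTaxonomyDAO warning go away (the storage Ids here are illustrative):

```xml
<ItemTypes defaultStorageId="defaultFile" cached="true">
    <!-- Pages/Binaries keep using the file-system default,
         but metadata (which includes taxonomy data) goes to the DB -->
    <Item typeMapping="Metadata" storageId="defaultdb" cached="true" />
</ItemTypes>
```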