ClearCase - Find out when view was created - Unix

We have many old snapshot views lying around and I need to find out when these snapshot views were created.
There is a twist - we are no longer running ClearCase, and the hardware we used to run it is no longer around. However, all the files used internally by ClearCase are still lying around, so I can go to a directory such as /usr7/viewstore/some_snapshot_sv and poke around.
I've got a timestamp on these directories, but this is not necessarily when the view was created.
I was wondering if somewhere in this directory structure there is a file in which I can find a creation date.
I'm in a Unix/Linux environment. ClearCase did run on IRIX.
Thanks.

Any metadata associated with the view is on the view server side, where the view storages are kept.
The one file that would be closest to the creation date is the .hostname file within a view storage.
It is only created and updated at view creation, and never changes unless the view is unregistered and then re-registered (very rare).
view.dat is also a good candidate (but it can likewise be regenerated, and it exists for snapshot views only).
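Since ClearCase itself is no longer running, a minimal shell sketch (assuming the view storages live under /usr7/viewstore as in the question, and that a *_sv glob matches them, which is an assumption) that prints the modification time of each .hostname file as an approximation of the view's creation date:

# For each view storage, list the mtime of .hostname, which is written at view creation
for sv in /usr7/viewstore/*_sv; do
    [ -f "$sv/.hostname" ] && ls -ld "$sv/.hostname"
done

The same loop works for view.dat; if the two timestamps agree, that is a further hint the view was never re-registered after creation.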
This IBM article lists all relevant files:
Files that are regenerated automatically when the view is restarted:
# .access_info
# .pid
Files that can be regenerated with ClearCase commands:
# .compiled_spec -- regenerate by running cleartool setcs -current
# .hostname -- regenerate by unregistering and re-registering the view
# view.dat -- Snapshot views only:
can be regenerated by running the "regen_view_dot_dat.pl" script
found in <cc-home-dir>\etc\utils
See technote 1204161 for more details on regenerating the view.dat file.
Files that can be manually replaced by making a new view on the same machine as the same user, and copying the affected file(s) to the view storage:
# config_spec
# groups.sd
# identity.sd
# view_db.state (as long as the view is not in the process
of being reformatted); see technote 1134858 for more information
# db/view_db.dbd (for schema 9 views only; 2002.05.00 and earlier)
# db/view_db_schema_version
# .view - The copy obtained from the new view must be edited to contain the correct information for the old view as described below. The correct information can be obtained from the output of "cleartool lsview -long <old_viewtag>".
Line 1: the location of the view storage directory, in hostname:pathname format
Line 2: the view's UUID (unique identifier), which must not be changed
Line 3: the hostname specified in line 1
Files that cannot be replaced:
# All other files in the db directory except the ones mentioned above
( view_db_schema_version and view_db.dbd)

If you still have cleartool available, you can try it this way:
cleartool lsview -properties [view-name]
* [view-name] /net/...[path]
Created 2014-01-07T18:05:15+02:00 by ...
Last modified 2014-01-07T21:13:07+02:00 by .....
Last accessed 2014-01-07T21:13:07+02:00 by .....
Owner: [owner-name] : rwx (all)
Group: [group-name] : r-x (read)
Other: : r-x (read)

Why is an object in the AOT marked with a red cross?

I have to extend a report's query to add a new field.
I've created an extension of the query, joined the required data sources, and can see the new field in the list of fields.
For some reason the report in the AOT is displayed with a red cross sign.
In the properties I can see an error in the metadata: "There was an error reading metadata. Make sure the metadata xml file(s) are accessible, are well formed and are not corrupted with duplicate or missing xml elements.
The exception message is: Element named: 'Copy1' of type 'ModelElement' already exists among elements: 'Copy1'.
Parameter name: item
Additional information:
AOT/Reports/Reports/WHSInvent"
There is an .xml file for that object in the packages local directory, and there are no duplicate names in any node of that report.
Any ideas how it can be fixed?
I've run into this before and there are two things that come to mind.
Often it's due to an incorrect merge, where changes are merged and metadata is accidentally duplicated (in your case it's possible there are two XML nodes with the same name/id in the .rdl file).
If this report is checked in with corrupt metadata, you need to manually modify the RDL file, which is not great, but hopefully the error contains enough hints. Open the report .rdl file in your favourite editor (the report is likely located at a path similar to K:\AosService\PackagesLocalDirectory\YOURMODEL\Reports) and look for an XML node with an attribute Name="Copy1". With luck, you have two duplicate nodes next to each other due to the merge. Remove the offending duplicate node, save, and refresh the AOT in Visual Studio.
If the error is in your local changes only (xml file is corrupted for whatever reason) and you are sure that your source control contains the correct version and you simply wish to overwrite the local contents with the source controlled version, follow these steps. Note: this will overwrite local changes.
First, undo pending changes.
Then force a get latest:
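A hedged command-line sketch of those two steps using TFS's tf.exe (the server path is hypothetical; the same can be done from Source Control Explorer with "Get Specific Version" and the overwrite option):

rem Undo pending changes on the corrupted report, then force a fresh download
tf undo "$/YourProject/Metadata/YOURMODEL/Reports/WHSInvent.rdl"
tf get "$/YourProject/Metadata/YOURMODEL/Reports/WHSInvent.rdl" /force

The /force switch makes tf get re-download the file even if the workspace claims it already has that version.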

How to track deleted files in Alfresco

I do not know whether this question is common or not.
I want to keep track of the files that have been deleted to the trashcan. I have read that after deletion the content eventually goes to contentstore.deleted, so from where, and how, can I get the details of deleted files?
You can use the SearchService to find all the nodes in the trashcan. These exist in archive://SpacesStore, and you query it just like you would workspace://SpacesStore.
StoreRef archiveStore = new StoreRef("archive://SpacesStore");
String query = "cm:title:\"mytitle.doc\"";
ResultSet results = searchService.query(archiveStore, SearchService.LANGUAGE_FTS_ALFRESCO, query);
If you delete a file, it goes into the trashcan, i.e. into archive://SpacesStore. It will remain there forever unless you (or the optional trashcan cleaner module) empty it out of the trashcan. After you empty the trashcan, it will still remain in archive://SpacesStore for 14 days; after those 14 days it is removed from the DB and the content is moved to contentstore.deleted.
Nodes that have been emptied out of the trashcan have all references (metadata) in the DB deleted, so they can no longer be accessed programmatically. The only thing that remains is the raw content in contentstore.deleted.
A good explanation of alfresco content deletion is here:
http://blyx.com/2014/08/18/understanding-alfresco-content-deletion/
Phase 2 - Any user or admin (or the trashcan cleaner) empties the trashcan:
That means the content is marked as an "orphan", and after a
pre-determined amount of time elapses, the orphaned content item is
moved from the alf_data/contentstore directory to the
alf_data/contentstore.deleted directory. Internally, at DB level, a
timestamp (Unix format) is added to the alf_content_url.orphan_time
field, where an internal process called contentStoreCleanerJobDetail
checks how long the content has been orphaned; if it is more than
14 days old (the system.content.orphanProtectDays option), the .bin
file is moved to contentstore.deleted. Finally, another process,
nodeServiceCleanupJobDetail, purges all of its references in the
database, and once the index knows the node has been removed, the
indexes are purged as well.
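If the 14-day window needs tuning, the property mentioned above can be set in alfresco-global.properties; a minimal sketch (14 days is the default):

# keep orphaned content for 30 days before it may move to contentstore.deleted
system.content.orphanProtectDays=30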

Dealing with the error: A different document with value xxxx already exists in the index

What would cause multiple documents on my catalog to have the same "unique id"? Effectively an error like this:
ERROR Products.ZCatalog A different document with value
'xxxx341a9f967070ff2b57922xxxx' already exists in the index.'
And how do I go about fixing it?
I had the same error today.
In short: the UID index in portal_catalog (ZCatalog UUIDIndex) complains that you are trying to index multiple objects with the same UID.
In my case it was caused by a zexp import of a folder that contained images that were already available in another folder.
To reproduce:
copy production buildout, database and blobstorage to staging server
do some changes to staging.com/folder1
move staging.com/galleries/gallery1 to staging.com/folder1
export staging.com/folder1 to folder1.zexp
remove production.com/folder1
use ZMI import/export on production.com/manage to import folder1.zexp
you'll get these errors for the gallery1 folder and all of its content items:
2015-06-15T17:58:22 ERROR Products.ZCatalog A different document with value '618a9ee3544a4418a1176ac0434cc63b' already exists in the index.'
diagnosis
production.com/resolveuid/618a9ee3544a4418a1176ac0434cc63b
will take you to production.com/galleries/gallery1/image1
whereas staging/resolveuid/618a9ee3544a4418a1176ac0434cc63b
will take you to staging.com/folder1/gallery1/image1
production.com/folder1/gallery1/image1 did get cataloged too, but because it has the same UID as production.com/galleries/gallery1/image1, the results of resolveuid, catalog queries, internal links and similar can be random.
how to repair
In my case I think it's probably best to either
delete production.com/galleries/gallery1 and run a clear and rebuild on the portal catalog,
or replace production.com/folder1/gallery1 with production.com/galleries/gallery1 (delete, cut, paste).
If the objects with the same UID are not actually the same (as in my case), you might be able to give them new, unique UIDs using object._setUID('new-uid') and rebuild the catalog afterwards.
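A minimal sketch of that last repair (assuming Archetypes content, where _setUID is available, and a hypothetical duplicate at folder1/gallery1/image1; site is the Plone site root, e.g. from a debug prompt):

import uuid
# hypothetical path to the object that should receive a fresh UID
obj = site.restrictedTraverse('folder1/gallery1/image1')
obj._setUID(uuid.uuid4().hex)  # 32-char hex, same shape as the UIDs in the error
site.portal_catalog.clearFindAndRebuild()  # reindex so resolveuid is unambiguous again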

Attempting to deploy a binary to a location where a different binary is already stored

When I am publishing my page from Tridion 2009, I am getting the error below:
Destination with name 'FTP=[Host=servername, Location=\RET, Password=******, Port=21, UserName=retftp]' reported the following failure:
A processing error occurred processing a transport package Attempting to deploy a binary [Binary id=tcm:553-974947-16 variantId= sg= path=/Images/image_thumbnail01.jpg] to a location where a different binary is already stored Existing binary: tcd:pub[553]/binarymeta[974950]
Below is my code snippet:
Component bigImageComp = th.GetComponentValue("bigimage", imageMetaFields);
string bigImagefileName = string.Empty;
string bigImagePath = string.Empty;
bigImagefileName = bigImageComp.BinaryContent.Filename;
bigImagePath = m_Engine.AddBinary(bigImageComp.Id, TcmUri.UriNull, null, bigImageComp.BinaryContent.GetByteArray(), Path.GetFileName(bigImagefileName));
imageBigNode.InnerText = bigImagePath;
Please suggest
Chris Summers addressed this on his blog. Have a read of the article - http://www.urbancherry.net/blogengine/post/2010/02/09/Unique-binary-filenames-for-SDL-Tridion-Multimedia-Components.aspx
Generally in Tridion Content Delivery we can only keep one version of a component. To get multiple "versions" of a multimedia component (MMC) we have to publish the MMC as variants. This way we can produce as many variants as we need via templating.
You can refer to the article below for more detail:
http://yatb.mitza.net/2012/03/publishing-images-as-variants.html#!/2012/03/publishing-images-as-variants.html
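A hedged sketch of that approach against the snippet in the question (assuming, as that call suggests, that the second parameter of this AddBinary overload is the variant ID; the TOM.NET property chain to the current template may differ per version):

// Use the rendering template's URI as the variant ID, so each template's
// rendition of the same multimedia component is stored as its own variant.
TcmUri variantId = m_Engine.PublishingContext.ResolvedItem.Template.Id;
bigImagePath = m_Engine.AddBinary(bigImageComp.Id, variantId, null,
    bigImageComp.BinaryContent.GetByteArray(), Path.GetFileName(bigImagefileName));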
When adding binaries you must ensure that the file and its metadata are unique. If one of the values (e.g. the filename) appears to be the same but the rest of the metadata does not match, deployment will fail.
In the given example (as Nuno points out) one binary is being deployed over an existing one: the filename is the same, but the binary is identified as not being the same (in this case, a different ID from the same publication). For this example you will need to rename one of the binaries (either the file itself, or change the path) and everything will be fine.
Another scenario is that the same image is used from two different templates, and the template ID is used as the variant ID. In that case it is the same image, BUT the variant ID check fails, so to avoid overwriting the image the deployer fails it.
Often unpublishing can help; however, the image is only removed when ALL references to it are removed, so if it is used from more than one place there are still open references.
This is logical protection by the deployer: you would not want the wrong image replacing another, either upsetting the layout or potentially changing the content to another meaning (think of an advertising banner).
This is the actual cause of, and reason for, the problem above (taken from a forum).

Drupal 7 deleting node does not delete all associated files

One file gets uploaded when the node is created via standard Drupal.
Later, 2 files are added to the node via:
file_save($file);  // $file is the stdClass file object
file_usage_add($file, 'mymodule', 'node', $node_id);
At the end, I end up with 3 entries in file_managed and file_usage.
Problem: when I delete the node via standard Drupal, the file that was added during the initial node creation gets removed, but not the 2 that were added later. These files remain in both tables, and physically on the disk.
Is there some flag that is being set to keep the files even if the node is deleted? If so, where is this flag, and how do I set it correctly (to be removed along with the node)?
The answer is in the file_delete() function, see this comment:
// If any module still has a usage entry in the file_usage table, the file
// will not be deleted.
As your module has declared an interest in the file by using file_usage_add() it will not be deleted unless your module explicitly says it's OK to do so.
You can either remove the call to file_usage_add() or implement hook_file_delete() and use file_usage_delete() to ensure the file can be deleted:
function mymodule_file_delete($file) {
  // Remove our module's usage record so the file can actually be deleted.
  file_usage_delete($file, 'mymodule');
}
You can also force deletion of the file:
file_delete($old_file, TRUE);
But first make sure that the file is not used by other nodes, using:
file_usage_list($file);
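Putting the two together, a minimal sketch (assuming a hypothetical mymodule and an already-loaded $file object):

// Only force-delete when no other module or entity still uses the file.
$usages = file_usage_list($file);
unset($usages['mymodule']);  // ignore our own usage record
if (empty($usages)) {
  file_delete($file, TRUE);  // TRUE forces deletion despite remaining usage rows
}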
