Drupal 7: deleting a node does not delete all associated files

One file gets uploaded when the node is created via the standard Drupal UI.
Later, two files are added to the node via:
file_save(stdClass)
file_usage_add(stdClass, 'module', 'node', $node_id)
In the end, I have three entries in both file_managed and file_usage.
Problem: when I delete the node via standard Drupal, the file that was added during the initial node creation gets removed, but not the two that were added later. Those files remain in both tables, and physically on disk.
Is there some flag being set that keeps the files even when the node is deleted? If so, where is this flag, and how do I set it so the files are removed along with the node?

The answer is in the file_delete() function; see this comment:
// If any module still has a usage entry in the
// file_usage table, the file will not be deleted.
As your module has declared an interest in the file by calling file_usage_add(), the file will not be deleted unless your module explicitly says it's OK to do so.
You can either remove the call to file_usage_add() or implement hook_file_delete() and use file_usage_delete() to ensure the file can be deleted:
function mymodule_file_delete($file) {
  file_usage_delete($file, 'mymodule');
}

You can force deletion of the file:
file_delete($old_file, TRUE);
But first make sure the file is not used by any other node, using:
file_usage_list($file);
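Putting the two together, a minimal sketch of a "delete only if unused" helper for Drupal 7 might look like this (safe_file_delete() is a hypothetical helper name, not a core function; it assumes the Drupal file API is loaded):

```php
<?php
/**
 * Delete a managed file only when no module or node still uses it.
 * Sketch for Drupal 7; assumes a full Drupal bootstrap.
 */
function safe_file_delete($file) {
  // file_usage_list() returns a nested array keyed by module, then
  // object type, then object id: array('module' => array('node' => array($nid => $count))).
  $usage = file_usage_list($file);
  if (empty($usage)) {
    // Nothing references the file: force removal from file_managed,
    // file_usage and the file system.
    return file_delete($file, TRUE);
  }
  // Still referenced elsewhere; leave it alone.
  return FALSE;
}
```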


How to add all files (added, modified, deleted) to index using JGit?

I am trying to use JGit to add all files to the index (staging area).
By doing git.add().addFilepattern(".").call() I get modified and new files added, but not deleted ones.
How do I add all the deleted ones as well?
I tried git.add().addFilepattern("-u") but it does not work.
Related question (about adding specific deleted files, not all deleted files): How can I use JGit to add deleted files to the index?
Have you tried using setUpdate(boolean)?
// fair warning: not tested, I only read JGit's doc
// (AddCommand requires at least one file pattern, so keep addFilepattern)
git.add().setUpdate(true).addFilepattern(".").call()
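A sketch of the full two-step staging (the repository path is a placeholder): setUpdate(true) stages modifications and deletions of already-tracked files but ignores new files, so staging everything typically takes two add calls.

```java
import java.io.File;
import org.eclipse.jgit.api.Git;

public class StageAll {
    public static void main(String[] args) throws Exception {
        // Hypothetical repository location.
        try (Git git = Git.open(new File("/path/to/repo"))) {
            // Pass 1: stage new and modified files.
            git.add().addFilepattern(".").call();
            // Pass 2: stage deletions (setUpdate(true) skips untracked files).
            git.add().setUpdate(true).addFilepattern(".").call();
        }
    }
}
```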

Why is object in AOT marked with red cross?

I have to extend report's query to add a new field.
I've created extension of a query, joined required datasources and can see new field in the list of fields.
For some reason the report in the AOT is displayed with a red cross sign.
In the properties I can see an error in the metadata: "There was an error reading metadata. Make sure the metadata xml file(s) are accessible, are well formed and are not corrupted with duplicate or missing xml elements.
The exception message is: Element named: 'Copy1' of type 'ModelElement' already exists among elements: 'Copy1'.
Parameter name: item
Additional information:
AOT/Reports/Reports/WHSInvent"
There is an .xml file for that object in the packages local directory, and there are no duplicate names in any node of that report.
Any ideas how it can be fixed?
I've run into this before and there are two things that come to mind.
Often it's due to an incorrect merge, where changes are merged and metadata is accidentally duplicated (in your case it's possible there are two xml nodes with the same name/id in the .rdl file).
If this report is checked in with corrupt metadata, you need to manually modify the RDL file, which is not great, but hopefully the error contains enough hints. Open the report rdl file in your favourite editor (report likely located in a similar path as this: K:\AosService\PackagesLocalDirectory\YOURMODEL\Reports) and look for an xml node with an attribute Name="Copy1". With luck, you have two duplicate nodes next to each other due to the merge. Remove the offending duplicate node, save, and refresh the AOT in Visual Studio.
If the error is in your local changes only (xml file is corrupted for whatever reason) and you are sure that your source control contains the correct version and you simply wish to overwrite the local contents with the source controlled version, follow these steps. Note: this will overwrite local changes.
First, undo pending changes.
Then force a get latest:
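With the TFVC command-line client (tf.exe) those two steps might look like the following; the file path is illustrative, based on the package path mentioned above:

tf undo "K:\AosService\PackagesLocalDirectory\YOURMODEL\Reports\WHSInvent.xml"
tf get "K:\AosService\PackagesLocalDirectory\YOURMODEL\Reports\WHSInvent.xml" /force /overwrite

The /force option re-downloads the file even if the server thinks it is up to date, and /overwrite replaces the writable local copy.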

How to Track deleted file in alfresco

I do not know if this question is common or not.
I want to track the files that have been deleted via the trashcan. I have read that after deletion they end up in contentstore.deleted, so how can I get the details of deleted files?
You can use the SearchService to find all the nodes in the trashcan. These exist in archive://SpacesStore, just like regular nodes exist in workspace://SpacesStore.
// Sketch using the Alfresco foundation Java API; the FTS query language
// constant and the archive store target are my assumptions here.
String query = "cm:title:\"mytitle.doc\"";
ResultSet results = searchService.query(
    new StoreRef("archive://SpacesStore"),
    SearchService.LANGUAGE_FTS_ALFRESCO,
    query);
If you delete a file, it will go into the trashcan and into archive://SpacesStore. It will remain there forever, unless you (or the optional trashcan cleaner module) empty it out of the trashcan. After you empty the trashcan, the orphaned content still remains in alf_data/contentstore for 14 days. After those 14 days the node is removed from the DB and the content is moved to contentstore.deleted.
Nodes that have been emptied out of the trashcan have all references (metadata) in the DB deleted, so they can no longer be accessed programmatically. The only thing that remains is the raw content in contentstore.deleted.
A good explanation of alfresco content deletion is here:
http://blyx.com/2014/08/18/understanding-alfresco-content-deletion/
Phase 2 - Any user or admin (or trashcan cleaner) empties the trashcan:
That means the content is marked as an "orphan" and, after a
pre-determined amount of time elapses, the orphaned content item is
moved from the alf_data/contentstore directory to the
alf_data/contentstore.deleted directory. Internally, at DB level, a
timestamp (unix format) is added to the alf_content_url.orphan_time field,
where an internal process called contentStoreCleanerJobDetail
checks how long the content has been orphaned. If it is more than
14 days old (the system.content.orphanProtectDays option), the .bin file is
moved to contentstore.deleted. Finally, another process will purge all
of its references in the database by running
nodeServiceCleanupJobDetail, and once the index knows the node has been
removed, the indexes will be purged as well.

SAP HANA custom dictionary: full-text-index not generated or updated

There are two problems with SAP HANA custom dictionaries.
1. Updating and recompiling the dictionary has no effect on the full-text-index table (even after dropping and regenerating the full-text index).
2. Using custom dictionaries & configuration may lead to an empty full-text-index table.
For the first problem:
Deleting the configuration file and replacing it with a new file (same content but a different file name), then activating all changes (which activates the deletion of the old config and adds the new one), seems to be a workaround.
Note: this means you also have to change the configuration name in the SQL command.
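For illustration, the drop-and-recreate with the renamed configuration might look like this (index, table, column and configuration names are hypothetical):

DROP FULLTEXT INDEX MYINDEX;
CREATE FULLTEXT INDEX MYINDEX ON MYTABLE (MYTEXT)
  CONFIGURATION 'EXTRACTION_CORE_MOD2_V2'
  TEXT ANALYSIS ON;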
For the second problem:
Check this trace file:
/usr/sap/HDB/HDB00/hanadb/trace/preprocessor_alert_hanadb.trc
This error message:
File read Error '/usr/sap/HDB/SYS/global/hdb/custom/config/lexicon//EXTRACTION_CORE_MOD2', error='Storage object does not exist: $STORAGEOBJECT$'
occurs if the configuration file EXTRACTION_CORE_MOD2 is not properly activated in the repository under sap.hana.ta.config. So double-check in the repository that the configuration file exists in the specified path.
For the first problem, I had the same scenario: I needed to make some changes in the custom dictionary and activate it. It did not affect my index table until I ran the following statement:
ALTER INDEX MYINDEX REBUILD;
I have checked it, and the changes affect the index table after this statement. So you do not have to remove your index or save the changes to your custom dictionary in a file with a new name.

Clearcase - Find out when view was created

We have many old snapshot views lying around and I need to find out when these snapshot views were created.
There is a twist - we are no longer running ClearCase and the hardware we used to run it is no longer around. However, we still have all the files used internally by ClearCase still lying around, so I can go to a directory /usr7/viewstore/some_snapshot_sv and poke around.
I've got a timestamp on these directories, but this is not necessarily when the view was created.
I was wondering if somewhere in this directory structure there was a certain file in which I can search for a creation date.
I'm in a Unix/Linux environment. ClearCase did run on IRIX.
Thanks.
Any metadata associated with the view is on the view server side, where the view storages are kept.
The one file which would be closest to the creation date is the .hostname file within a view storage.
It is only created and updated on view creation, and never changes unless the view is unregistered and then re-registered (very rare).
view.dat is also a good candidate (but it can also be regenerated, and exists for snapshot views only).
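Since ClearCase itself is gone, plain Unix tools can pull those timestamps. A sketch (the demo below creates a throwaway directory in place of the real /usr7/viewstore):

```shell
# Stand-in for the real view-storage root, e.g. /usr7/viewstore.
root=$(mktemp -d)
mkdir -p "$root/some_snapshot_sv"
touch "$root/some_snapshot_sv/.hostname"

# List every view's .hostname with its modification time; since the file
# is written at view creation and rarely touched again, its mtime is a
# good approximation of the creation date.
find "$root" -name ".hostname" -exec ls -l {} \;
```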
This IBM article lists all relevant files:
Files that are regenerated automatically when the view is restarted:
# .access_info
# .pid
Files that can be regenerated with ClearCase commands:
# .compiled_spec -- regenerate by running cleartool setcs -current
# .hostname -- regenerate by unregistering and re-registering the view
# view.dat -- Snapshot views only:
can be regenerated by running the "regen_view_dot_dat.pl" script
found in <cc-home-dir>\etc\utils
See technote 1204161 for more details on regenerating the view.dat file.
Files that can be manually replaced by making a new view on the same machine as the same user, and copying the affected file(s) to the view storage:
# config_spec
# groups.sd
# identity.sd
# view_db.state (as long as the view is not in the process
of being reformatted); see technote 1134858 for more information
# db/view_db.dbd (for schema 9 views only; 2002.05.00 and earlier)
# db/view_db_schema_version
# .view - The copy obtained from the new view must be edited to contain the correct information for the old view as described below. The correct information can be obtained from the output of "cleartool lsview -long <old_viewtag>".
Line 1: the location of the view storage directory, in hostname:pathname format
Line 2: the view's UUID (unique identifier), which must not be changed
Line 3: the hostname specified in line 1
Files that cannot be replaced:
# All other files in the db directory except the ones mentioned above
( view_db_schema_version and view_db.dbd)
If you still have access to cleartool, you may try it this way:
cleartool lsview -properties [view-name]
* [view-name] /net/...[path]
Created 2014-01-07T18:05:15+02:00 by ...
Last modified 2014-01-07T21:13:07+02:00 by .....
Last accessed 2014-01-07T21:13:07+02:00 by .....
Owner: [owner-name] : rwx (all)
Group: [group-name] : r-x (read)
Other: : r-x (read)