I have 10 files that I want to delete, with one condition:
if I have deleted files 1 through 5 and then hit an exception on file 6, I should restore all the files deleted so far.
The files should end up deleted only if no exception occurs while deleting any of them.
It's something like the rollback of a transaction in a database. Is there any property of files with which I can achieve this concept?
Any help would be great. Thank you.
You could do this in two phases. First, rename all the files (or move them). Then, once you're happy that all the files are deletable, you can go and really delete them. If not, you rename them back (or move them back) to their original state.
The actual deletion could happen as a batch job as well.
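As a rough C# sketch of this two-phase approach (the file list and the .pending_delete suffix are just placeholder conventions):

```csharp
using System.Collections.Generic;
using System.IO;

static class TwoPhaseDelete
{
    public static void DeleteAll(IEnumerable<string> paths)
    {
        // Phase 1: rename every file out of the way; nothing is lost yet.
        var renamed = new List<(string Original, string Temp)>();
        try
        {
            foreach (var path in paths)
            {
                var temp = path + ".pending_delete";
                File.Move(path, temp);
                renamed.Add((path, temp));
            }
        }
        catch
        {
            // Rollback: restore every file renamed so far.
            foreach (var (original, temp) in renamed)
                File.Move(temp, original);
            throw;
        }

        // Phase 2: every rename succeeded, so the real deletes are safe.
        foreach (var (_, temp) in renamed)
            File.Delete(temp);
    }
}
```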
If transactional NTFS doesn't work out for you, you could take the route of copying all the files to a temp location before deleting, then catching any exceptions. If there are exceptions, check whether each file still exists, and copy back from the temp location any that are missing.
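A hedged sketch of that copy-then-restore variant in C# (backupDir is a placeholder for wherever your temp location is):

```csharp
using System.IO;

static class BackupThenDelete
{
    public static void DeleteAll(string[] paths, string backupDir)
    {
        // Back up every file first; only then start deleting.
        Directory.CreateDirectory(backupDir);
        foreach (var path in paths)
            File.Copy(path, Path.Combine(backupDir, Path.GetFileName(path)), overwrite: true);

        try
        {
            foreach (var path in paths)
                File.Delete(path);
        }
        catch
        {
            // Restore any file that is now missing from its original location.
            foreach (var path in paths)
            {
                if (!File.Exists(path))
                    File.Copy(Path.Combine(backupDir, Path.GetFileName(path)), path);
            }
            throw;
        }
    }
}
```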
If the files don't tend to be too large, you could also consider storing the files in a relational database where you get that kind of transactional support and more.
You can have a look at Transactional NTFS, which does what you want. I've not tried it myself, but here is a link to a managed wrapper for using it in .NET:
http://code.msdn.microsoft.com/txfmanaged
(Be aware that Microsoft has since deprecated Transactional NTFS and discourages its use for new development.)
I have a contentstore configured at the content store location below:
D:\alfresco-content-services\alf_data\contentstore\2019
I want to delete the 2019 folder shown above under the contentstore. I don't need 2019 anymore; basically, this is purging.
If I delete the files in that folder, will it clean up the metadata and indexes as well?
Or will it corrupt my repository? What's the best way to achieve mass deletion that also deletes the references in the database without corrupting the repo?
Thanks & Regards
Brijesh
If you delete any folder from the content store, it will not affect the database (and hence the indexes) in any way. You will end up with nodes referencing .bin files that do not exist anymore, though.
Note that if that folder is the first year in your content store, it also contains some files that Alfresco uses to determine whether the content store matches the database. In that case, deleting the folder will break Alfresco (the repository will not start if it cannot find those files).
Mass deletion in general is tricky; I'd suggest using the Bulk Import Tool's delete web script, which does this as fast as possible (it avoids audit logs, the recycle bin, etc.).
According to these questions:
Automatically Delete Files/Folders
how to delete a file with R?
the two ways to delete files in R are file.remove and unlink. These are both permanent and non-recoverable.
Is there an alternative method to delete files so they end up in the trash / recycle bin?
I wouldn't know about a solution that is fully compatible with Windows' "recycle bin", but if you're looking for something that doesn't quite delete files, but prevents them from being stored indefinitely, a possible solution would be to move files to the temporary folder for the current session.
The command tempdir() will give the location of the temporary folder, and you can just move files there - to move files, use file.rename().
They will remain available for as long as the current session is running, and will automatically be deleted afterwards. This is less persistent than the classic recycle bin; if you need files to survive the session, you probably just want to move them to a dedicated folder and delete it completely when you're done.
For a slightly more consistent syntax, you can use the fs package (https://github.com/r-lib/fs), and its fs::path_temp() and fs::file_move().
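A minimal sketch of that idea (soft_delete and the file name are made up for illustration):

```r
# Move a file into the session's temporary folder instead of deleting it;
# it stays recoverable until the R session ends, then is cleaned up.
soft_delete <- function(path) {
  trash <- file.path(tempdir(), basename(path))
  file.rename(path, trash)   # note: file.rename() cannot move across filesystems
  invisible(trash)           # return the new location in case we need to restore
}

trashed <- soft_delete("report.csv")  # "delete" the file
# file.rename(trashed, "report.csv")  # ...or put it back later in the session
```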
I use Berkeley DB in a project.
I have some environments composed of several files. Sometimes I need to remove some of these files.
When I remove a file through the filesystem, opening the environment raises a No such file or directory error.
Is there a way to safely remove a file from a Berkeley DB environment?
To remove a database, you need to be absolutely certain that no references to the database exist in the environment. The most foolproof way to do this is as follows:
1. Use db->remove() to remove the database from inside your application.
2. Use dbenv->txn_checkpoint() to flush all changes, checkpoint the log, and then flush the log.
3. Use dbenv->txn_checkpoint() with the DB_FORCE flag to push one more checkpoint through, ensuring that when the environment is recovered it doesn't attempt to recover databases that predated the last checkpoint.
Step 3 sounds insane, I know. And maybe it isn't needed any more. But it certainly was needed in the not-too-distant past. Certainly steps 1 and 2 are needed. You'll need to experiment to see if step 3 is necessary for your application.
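A rough C sketch of those steps, assuming an already-opened DB_ENV * and that no handles to the database remain open anywhere in the application (error handling kept minimal):

```c
#include <db.h>

int remove_database_safely(DB_ENV *dbenv, const char *filename)
{
    DB *db;
    int ret;

    /* Step 1: remove the database through Berkeley DB, not the filesystem.
     * The handle must not have been opened; it is destroyed by remove(). */
    if ((ret = db_create(&db, dbenv, 0)) != 0)
        return ret;
    if ((ret = db->remove(db, filename, NULL, 0)) != 0)
        return ret;

    /* Step 2: checkpoint to flush all changes and the log. */
    if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, 0)) != 0)
        return ret;

    /* Step 3: force one more checkpoint so recovery never reaches back
     * past the removal. */
    return dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE);
}
```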
I have an elevated process and I want to make sure the SQLite files that it creates are readable by other processes. For some reason umask doesn't seem to do what I want (set permissions of sqlite file created by process).
I'm using write-ahead logging, so -wal and -shm files are created in addition to the database file. I want all 3 to be chmodded correctly.
I wonder if it's possible to get in after the SQLite file is created and chmod it.
Possible approaches:
1. Touch all 3 files before SQLite tries to create them, then chmod them and hope the mask stays the same.
2. Intercept when the files are created and chmod them.
3. Work out how to get umask to work for the process.
4. Mystery option four.
What's the best way to go?
Questions for approaches:
Will SQLite be OK with this?
Do we know when all 3 files are created? Is there some kind of callback I can give a function pointer to? Do we know if the same wal and shm files are around forever? Or are they deleted and re-created?
You can touch the database file before opening it. (When you use the sqlite3 command-line tool to open a new file, but do nothing but begin; and commit;, SQLite itself will create a zero-sized file.)
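A minimal C sketch of that touch-then-open idea on a POSIX system (the path and mode here are placeholders):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <sqlite3.h>

/* Create the database file with the desired permissions before SQLite
 * opens it; the -wal and -shm files then inherit those bits. */
sqlite3 *open_world_readable(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd >= 0) {
        fchmod(fd, 0644);   /* override whatever the umask stripped off */
        close(fd);
    }

    sqlite3 *db = NULL;
    if (sqlite3_open(path, &db) != SQLITE_OK) {
        sqlite3_close(db);  /* sqlite3_open allocates a handle even on error */
        return NULL;
    }
    return db;
}
```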
If you want to intercept file operations, you can register your own VFS.
The -wal and -shm files are created dynamically, but SQLite will give them the same permission bits as the main database file. The comments for robust_open() in os_unix.c say:
If the file creation mode "m" is 0 then set it to the default for
SQLite. The default is SQLITE_DEFAULT_FILE_PERMISSIONS (normally
0644) as modified by the system umask. If m is not 0, then
make the file creation mode be exactly m ignoring the umask.
The m parameter will be non-zero only when creating -wal, -journal,
and -shm files. We want those files to have exactly the same
permissions as their original database, unadulterated by the umask.
In that way, if a database file is -rw-rw-rw or -rw-rw-r-, and a
transaction crashes and leaves behind hot journals, then any
process that is able to write to the database will also be able to
recover the hot journals.
From everything I have read so far, it seems as though you copy the DB from assets to a "working directory" before it is used. If I have an existing SQLite DB, I put it in assets, and then I have to copy it before it can be used.
Does anyone know why this is the case?
I can see a possible application for that, where one doesn't want to accidentally corrupt the database during a write. But in that case, one would have to move the database back after finishing work on it; otherwise, the next time the program runs it will start from the "default" database state.
That might be another use case: you might always want to start program execution with a known data state, where the previous state might have been set by an external application.
Thanks everyone for your ideas.
I think what I might have figured out is that the installer cannot put a DB directly into the /data directory.
In Eclipse there is no /data, which is where most of the discussions I have read say to put it.
This is one of the several I found:
http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-4/#comment-37008
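For what it's worth, the usual explanation is that files under assets/ are packaged inside the APK and are read-only, so SQLite cannot open them read-write in place; the pattern those discussions describe is to copy the database out to the app's writable database directory on first run. A hedged Java sketch of that pattern (the class and parameter names are made up here):

```java
import android.content.Context;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class DbInstaller {
    /** Copy a prepackaged database from assets/ into the app's writable
     *  database directory, unless it has already been copied. */
    public static File installFromAssets(Context context, String dbName)
            throws IOException {
        File target = context.getDatabasePath(dbName);
        if (target.exists()) {
            return target;                 // already copied on an earlier run
        }
        File dir = target.getParentFile();
        if (dir != null && !dir.exists()) {
            dir.mkdirs();                  // the databases/ folder may not exist yet
        }
        try (InputStream in = context.getAssets().open(dbName);
             OutputStream out = new FileOutputStream(target)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
        return target;
    }
}
```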