I recently attempted to move some files by running an -exec mv command with find (command shown below). When I did this, I mistyped the destination directory path (so the directory did not yet exist), and mv created what appears to be an executable file instead of a directory.
When I run "Get Info", one image renders and the file size is about right for a single image, but hundreds of files were supposed to be moved. Have I lost this data for good? Is there any way to get macOS to recognize this "executable" as a directory?
This is the command I used:
find . -type f -name "*.JPG" -exec mv {} ../../DestinationFolderName \;
Here's an image showing a successful mv into an existing directory, and what happened when I put a path to a directory that did not yet exist.
Unfortunately "mv" to a name that doesn't exist is interpreted as a filename rather than a directory. So the OS has, one-by-one, copied your JPG file on top of each other. The resulting file is most likely whatever JPG happened to be the one it moved last (if you rename it to JPG extension you can check which one).
So, very unfortunately, you probably need to investigate a data recovery tool for macOS quickly (and do so before you do things that create more files on your disk, as much as possible). The "ghosts" of the files are, for now, mostly still present on your hard drive (as deallocated segments), but they are back in the pool to be overwritten as you create new files (even when your browser creates temporary cache files, and things like that). It's a conundrum.
If you don't have a backup/Time Machine copy of the files, the best thing to do is get a macOS data recovery program QUICKLY.
VERY sorry not to have a happier answer.
Suddenly the grep command stopped working. When I ran ls -l ~/grep, it showed a grep file sitting in my home directory, but that file has been there for ages. which grep points to /bin/grep, and running /bin/grep directly works fine. Can anyone please suggest what is going on?
Thanks,
Regards,
Shiv
You can delete the zero-byte file in your home directory. It's not doing anything. (I don't know how it got there.) The problem is that the first entry in PATH, ".", points to whatever directory you're in. So when you're in your home directory, the shell (bash, I assume) looks for grep in the current directory, and finds the file that's there, which can't do anything.
I consider it a bad idea to have "." in your path. It's convenient, and natural if you're coming from the Windows world, but it means that what gets executed can change depending on what directory you're in (as you have now seen). It also means that if you're on a multiuser system, someone can put an executable in one of their directories, and then when you cd into their directory, all of a sudden you're executing their code, which might not be what you want, and could be dangerous.
Instead, remove ".:" (dot colon) from your PATH. When you need to run a script in the current directory, add "./" to its name to execute it. "/bin" and "/usr/bin" should usually be at the front of the list. Some people prefer to put "/usr/local/bin" at the front of the list, or something else.
You can change your PATH by editing .profile or .bash_profile or .bashrc. It depends on how you have your shell set up. Be careful to separate each directory path in PATH with one ":" character.
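A minimal sketch of what the relevant line might look like in ~/.bash_profile; the exact directories will vary from system to system:

# Search only trusted system directories; note there is no "." entry.
export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"

# A script in the current directory is then run explicitly:
#   ./myscript.sh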
I am having a problem with something seemingly very simple in Unix. I used the following code to move a file to another directory:
mv genes.gtf ./ ../..
The file is no longer in the original directory, but it has not shown up in the destination directory either! Has anyone experienced a similar thing before? What is causing the problem? Is it possible for it to take a while for a file to be moved, so it shows up in the destination directory with a big delay?
When three arguments are passed to mv, the first two are treated as sources and the last one as the destination (which must be a directory). It seems you moved both genes.gtf and the current directory (./) into ../..
I think what you meant to write was mv genes.gtf ../..
As for what happened to your file, I have no idea; I've never attempted to move ./ anywhere in unix/linux before.
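If it is any consolation, a plain name search from a couple of directory levels up should reveal where (or whether) the file still exists; a quick sketch, run from the original directory:

find ../.. -name genes.gtf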
Based on a lot of research, below is what I have come up with:
find /Some_Dir -type f -mtime +30 -delete -printf "%TD %p\n" >> /Logfile.txt 2>&1
This is doing a good job of deleting the files, and it also handles files with spaces. One concern I have is that it is also deleting files that are read-only, or even files with 000 permissions. Is that the expected result?
Yes. You are not modifying the files, you are modifying the containing directory.
To forbid deletion of files, you need to either deny write access to the directory, or set the sticky bit (more accurately but less mnemonically known as the "restricted deletion flag") on the directory and ensure the user trying to delete the file does not own the file or the directory. World-writable directories like /tmp will generally have the sticky bit already turned on.
If you just want to change the behavior of the find command, use -writable or -perm.
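For instance, a sketch of two possible variants (GNU find, as your original -printf already assumes): -writable keeps only files the invoking user can actually write, while -perm tests the permission bits themselves:

# Only delete files the current user has write permission on
find /Some_Dir -type f -mtime +30 -writable -delete

# Or: only delete files whose owner-write bit is set (skips 000 and r--r--r-- files)
find /Some_Dir -type f -mtime +30 -perm -u+w -delete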
I have a Nexenta NAS I'm running as an SMB fileserver, so basically OpenIndiana. One gigantic SMB share, shared with Windows, OS X and Linux clients. Somehow I ended up with a whole tree of files that have been named with their full Windows path. So for example:
"clientname - clientnameseriespreview\images\imagename.gif" That whole thing is a filename, not a path.
This causes issues with both client access to those files and backups, because both of those happen through SMB; although the \ character is fine as far as the Mac and Linux clients go, they can't see or touch these files over SMB.
On the server itself I have tried every method I could find to rename them. Properly escaped, find has no problem finding them, but "rename" doesn't seem to be able to touch them at all. mv works fine, but I'm not sure how to script that to run against the whole tree (thousands of files).
Pretty much every reference I've found for this kind of renaming is along the lines of:
find -name '*\\*' -type f | rename 's/\\/_/g'
The find part works, but rename does nothing. Not quite sure what I'm missing...
Thanks.
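Implementations of rename differ quite a bit (some take file names only as arguments rather than on standard input, and the util-linux variant does not understand Perl expressions at all), so the pipeline above may simply never hand it anything it can use. Since mv is known to work on these names, a find -exec loop that does the substitution in the shell is one way to script it over the whole tree; a sketch, assuming bash is available on the server and that find supports -exec ... +, and untested on Nexenta:

find . -type f -name '*\\*' -exec bash -c '
  for f in "$@"; do
    # replace every backslash in the name with an underscore
    mv -- "$f" "${f//\\/_}"
  done
' bash {} +

Because the backslashes occur only in the final file name component here, the directory part of each path is left untouched; it may still be worth substituting echo mv for mv on a first pass to preview the renames.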
I originally had three files: makefile, readme.txt, and hashtable.c in my directory, where I am writing my code in emacs. I noticed that some new files: #hashtable.c#, #readme.txt#, hashtable.c~, and makefile~ have been created. I was wondering what these files were. Are these important, and if not, how do I tell emacs to stop making them? I'm also curious why readme.txt doesn't get a tilde file and makefile doesn't get a sharp file.
The file with the ~ is a backup file that automatically gets created when you save a file. The #readme.txt# is the file being currently edited/in use (i.e., the autosave version). That will usually go away (unlike the ~ file) when you exit emacs normally (if it crashes or gets killed the # files may stay around).
You might find this page about emacs backup files of interest, and this SO question: How do I control how Emacs makes backup files?
You can prevent backup files from being created with this:
(setq make-backup-files nil)
I recommend installing no-littering. It automatically puts backup files (file~) in ~/.emacs.d/var/backup/. It doesn't do anything about autosaves (#file#), but there is a note about putting those files in a specified directory in the README:
(setq auto-save-file-name-transforms
      `((".*" ,(no-littering-expand-var-file-name "auto-save/") t)))
Neither of these things actually prevents Emacs from creating these files, but I'm assuming most people actually want these files (in case of a crash), but don't want them strewn all over the filesystem.
For #files# you have to run rm "#file#" (or escape the hashes) from the terminal, because in most interactive shells an unquoted # starts a comment, so rm #file# never sees the file name.
For backup files like file~ you can simply type rm file~; the trailing ~ has no special meaning to the shell.
Maybe you could try:
find . -name '#*#' -exec rm -- {} +
Warning: this will also remove matching files in subdirectories.
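If in doubt, a dry run that only prints the matches before anything is removed may be worth doing first:

find . -name '#*#' -print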