Figured maybe someone here might know what's going on. Essentially, what I have to do is take a directory and make a tar file of it, omitting a subdirectory two levels down (root/1/2). Given that it needs to work on a bunch of platforms, the easiest way I could think of was to do a find and egrep that directory out, which works well, giving me the list of files.
But then I pipe that file list into an xargs tar rvf command and the resulting file comes out at something like 33 GB. I've also tried outputting the find to a file and using tar -T with that file as input; it still comes out at about 33 GB, whereas if I do a straight tar of the whole directory (not omitting anything) it comes in where I'd expect it, at around 6 GB.
Any thoughts on what is going on, or how to remedy this? I really need to get this figured out. I'm guessing it has to do with feeding tar a list of files vs. having it just tar a directory, but I'm not sure how to fix that.
Your find command will return directories as well as files.
Consider using find to look for directories and to exclude the one you don't want:
tar cvf /path/to/archive.tar $(find suite -type d ! -path 'suite/tmp/Shared*')
When you specify a directory in the file list, tar packages the directory and all the files in it. If you then list the files in the directory separately, it packages the files (again). If you list the sub-directories, it packages the contents of each subdirectory again. And so on.
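You can check for this duplication yourself by listing the bloated archive and counting repeated entry names (a quick sanity check, assuming the archive is called archive.tar):
tar tf archive.tar | sort | uniq -c | sort -rn | head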
If you're going to do a files list, make sure it truly is a list of files and that no directories are included.
find . -type f ...
The ellipsis might be find options to eliminate the files in the sub-directory, or it might be a grep -v that eliminates them. Note that -name normally only matches the last component of the name. GNU find has ! -path '*/subdir/*' or variants that will allow you to eliminate files based on their path, rather than just their name:
find . -type f ! -path './root/1/2/*' -print
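Putting that together, one way to wire the list straight into tar (a sketch assuming GNU find and GNU tar; the -print0/--null pair keeps filenames containing spaces or newlines intact):
find . -type f ! -path './root/1/2/*' -print0 | tar -cvf archive.tar --null -T -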
I see plenty of answers on how to list all symlinks and how to remove all symlinks within a specific directory. However, what about vice versa?
How would one go about listing/removing all directories within a directory that are not symlinks?
I know that rm -R removes all directories recursively, but I want to know how to make it not delete symlinks in the process.
I also know that ls lists all directories, files and symlinks; however, I would like to know how I would go about listing only the directories that are not symbolic links.
Found a way finally.
First, run:
find . -depth -type d
to make sure the output looks sane, then:
sudo find . -depth -type d -exec rm -rf '{}' \;
Sure, this does get a bit messy on the console to look through, but... it works! If anyone can find a better and cleaner way to do this, please post it.
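One possible cleanup, sketched but untested: -mindepth 1 keeps find from trying (and failing) to remove . itself, and the + form batches the directories into fewer rm invocations. Since -type d does not match symlinks, links to directories are left alone either way:
sudo find . -mindepth 1 -depth -type d -exec rm -rf '{}' +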
I am new to Perforce and Unix. While doing a 'p4 sync' I was getting a "can't clobber writable files" error for some of my files.
I then did a "chmod -R 555 ./*", thinking that it would remove the write permissions for the files that were giving me the above-mentioned error. I didn't know that directories and files in the Perforce workspace need different permissions. So now I have set r-x permissions on all directories and files, and when I try to do a 'p4 sync' I get the following kind of error for all the files:
open for write: /home/path_to_file/tmp.18455.196170: Permission denied
What should I do to revert to the original permissions that Perforce provides?
An easy way to apply different permissions to files vs directories is to use find, like so:
find . -type d -print0 | xargs -0 chmod 755
find . -type f -print0 | xargs -0 chmod 444
This would apply permissions 755 to directories and 444 to files.
However, please note I don't know which permissions you have to apply in your case; you may want to look at another installation to get an idea. In your case I suspect the error message comes from the directories missing write permission.
Also note that using an octal mask with chmod is not necessarily what you want, as it means "assign exactly these permissions"; when you want to "remove" or "add" permissions, it's usually better to use a symbolic mode. For example, to remove all three write bits on files only, you would specify a-w (remove w from all fields):
find . -type f -print0 | xargs -0 chmod a-w
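Correspondingly, to put write permission back on the directories only (a guess at what this particular setup needs; adjust the symbolic mode to taste):
find . -type d -print0 | xargs -0 chmod u+w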
Finally, note that you can use find to recursively list permissions of all files and directories, for manual verification:
find . -ls
The error indicates that files in the workspace are writable, but have not been checked out from Perforce (opened for edit).
If you want writable files to be overwritten by files you sync from Perforce, set the 'noclobber' option to 'clobber' in the client spec.
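For example, run 'p4 client' and change 'noclobber' to 'clobber' on the Options line; a sketch (the other flags shown are common defaults and may differ in your spec):
Options: noallwrite clobber nocompress unlocked nomodtime normdir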
More information about this option and the 'p4 client' command is available here:
http://www.perforce.com/perforce/r15.1/manuals/cmdref/p4_client.html#p4_client.usage
Hope this helps,
Jen.
Hi, I am trying to do a find and copy a plist to multiple Preferences folders within the user folders, but I am coming up on an error. I am hoping someone can help point it out for me or help me understand what I'm doing wrong.
find . -type d -name 'Preferences' -maxdepth 3 -exec cp -r {} /Users/ladmin/Desktop/source.plist *Library/Preferences \;
Running just this
find . -type d -name 'Preferences' -maxdepth 3
prints out what I am trying to copy into username/Library/Preferences
Then I want to copy the plist to the preferences folder of every user.
I hope this isn't too complicated for people to read.
Thanks Kris
Not entirely sure why that should cause an error, though it does have several issues.
find is recursive and cp -r is recursive, and they are both traversing the same tree. You can add the -prune test to find to stop it from descending into the directories it has found.
Not sure if this affects anything here, but find generally likes options (e.g. -maxdepth) to come first.
*Library/Preferences: if this expands to multiple paths, all but the last of them will get copied into the last one.
But I think the main issue is that you are trying to copy a bunch of directories all named Preferences into a single destination, so they will land on top of one another and each copy will overwrite the previous one.
find . -maxdepth 3 -type d -name 'Preferences' -prune -exec echo cp -ivr {} /Users/ladmin/Desktop/source.plist username/Library/Preferences \;
This fixes the earlier issues, but it's not clear from the question what should happen when a directory with that name already exists. The -iv will prompt you before overwriting when conflicts occur and add some verbosity, and the leading echo makes this a dry run: inspect the printed commands, then remove the echo to actually copy. (find's -exec ... {} + form runs the command on many names at once, which is faster, but it requires {} to be the last argument, and that doesn't fit cp here because cp wants the destination last.)
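If the goal is actually the reverse, copying source.plist into every user's Preferences folder as the question describes, a hedged sketch (paths assumed from the question):
find /Users -maxdepth 3 -type d -name 'Preferences' -exec cp -iv /Users/ladmin/Desktop/source.plist {} \;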
On my Linux system, I've got into a situation where there are no write/execute permissions on the directories of a mounted drive. As a result, I can't get into a directory until I open up its permissions. This happens every time I mount that drive. The mounting operation is done by a tool under its hood, so I doubt I could modify the mount parameters to address this problem.
As a workaround, I am using this find command to modify permissions on directories. I have to run it repeatedly, since each run only reaches one more level of directories.
find . -type d -print0 | xargs -0 -n 1 chmod a+wrx
I am sure there is a better way to do this. I wonder if there is a find option that processes a directory first and then its contents, the opposite of the -depth/-d option.
Any tips?
Try:
chmod +wrx /path/to/mounted/drive/*
Another possibility is to investigate the mount options available for that particular file system type (I'm guessing FAT/VFAT here, but it might be something else). Some file systems have mount options for overriding the default permissions in some form or other. That would also avoid having to change all the permissions, which might have some effect when you put that file system back wherever its original source is (is this a memory card from a camera, or a USB stick, or...?).
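For instance, if it really is VFAT, something along these lines when mounting by hand (uid/gid/dmask/fmask are standard vfat mount options; the device and mount point here are placeholders):
sudo mount -t vfat -o uid=$(id -u),gid=$(id -g),dmask=022,fmask=133 /dev/sdXn /mnt/usb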
Thanks to StarNamer at unix.stackexchange.com, here's something that worked great:
Try:
find . -type d -exec chmod a+rwx {} ';'
This will cause find to execute the chmod on each directory as it is encountered, before it tries to read that directory's contents, so each level is opened up just in time for find to descend into it, rather than generating the whole list first and feeding it to xargs.
From: https://unix.stackexchange.com/q/45907/22323
I've been stuck on a little unix command line problem.
I have a website folder (4 GB) I need to grab a copy of, but just the .php, .html, .js and .css files (which are only a couple hundred KB altogether).
I'm thinking, ideally, there is a way to zip or tar a whole folder while only grabbing certain file extensions, and retaining the subfolder structure. Is this possible, and if so, how?
I did try doing a whole zip, then going through and excluding certain files, but it seemed a bit excessive.
I'm kinda new to unix.
Any ideas would be greatly appreciated.
Switch into the website folder, then run
zip -R foo '*.php' '*.html' '*.js' '*.css'
You can also run this from outside the website folder:
zip -r foo website_folder -i '*.php' '*.html' '*.js' '*.css'
You can use find and grep to generate the file list, then pipe that into zip
e.g.
find . | egrep "\.(html|css|js|php)$" | zip -@ test.zip
(-@ tells zip to read the file list from stdin)
This is how I managed to do it, but I also like ghostdog74's version.
tar -czvf archive.tgz `find test/ | egrep "\.(html|php)$"`
You can add extra extensions by adding them to the regex.
I liked Nick's answer, but, since this is a programming site, why not use Ant to do this? :)
Then you can put in a parameter so that different types of files can be zipped up.
http://ant.apache.org/manual/Tasks/zip.html
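A minimal sketch of what that build file might look like (the attribute names are from the linked manual page; the project name, paths and patterns are assumptions for illustration):
<project name="grab-web-files" default="zip">
  <target name="zip">
    <zip destfile="site-code.zip"
         basedir="website_folder"
         includes="**/*.php,**/*.html,**/*.js,**/*.css"/>
  </target>
</project>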
You may want to use (GNU) find to find all your .php, .html, etc. files, then tar them up:
find /path -type f \( -iname "*.php" -o -iname "*.css" -o -iname "*.js" -o -iname "*.ext" \) -exec tar -r --file=test.tar "{}" +
After that you can compress it.
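For example, with gzip (turning test.tar into test.tar.gz):
gzip test.tar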
You could write a shell script to copy files based on a pattern/expression into a new folder, zip the contents and then delete the folder. Now, as for the actual syntax of it, I'll leave that to you. :D