I used tar -cvf sample_directory/* and didn't specify an output file such as file.tar.gz. Now the Makefile within the folder is in some unreadable format. Is there a way to recover my Makefile?
The Makefile within the folder contains the output from the tar command: the shell expanded sample_directory/*, and tar took the first match, your Makefile, as the archive name. So it isn't in "some unreadable format"; it's plain tar format (not even gzipped, since you didn't pass -z). That tar archive won't contain your missing Makefile, though; its original contents were overwritten.
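You can see this for yourself by listing the clobbered file as an archive (this assumes Makefile was the first name the glob expanded to, which is what made it the archive):
tar -tvf sample_directory/Makefile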
The comments about recovering the Makefile from your backups or from your version control system are apt. This is in fact what you need to do.
If you don't have a backup or the Makefile wasn't checked in to a version control system, then there isn't a feasible way to recover its contents.
Aside from the issue of your poor lost Makefile, a piece of advice about using tar: never tar up a bunch of individual files inside a directory; always tar up the directory itself instead. There is not much more annoying than untarring an archive that contains a big bunch of files instead of a single directory (which then contains the files). Doing that makes a mess by littering files all over whatever directory happens to be the current one. Please be nice to whoever is going to extract your tar files (which might be yourself, later on!): follow convention and tar up complete directories.
tar -czf file.tar.gz sample_directory
As a bonus, if you do it that way, and you forget the output filename like this:
tar -czf sample_directory
You won't squash anything; you'll just get an error.
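With GNU tar, for instance, the mistake above fails with something like:
tar -czf sample_directory
tar: Cowardly refusing to create an empty archive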
I have a file (reviews_dataset.tar.gz) that contains many files of data. I am required to extract the files in this archive and then perform some basic commands on them. So far I have created a directory named CW and found the command tar zxvf fileNameHere.tgz, but when I run it, it of course cannot find my file, as I have not "downloaded" it into my directory yet. How do I get this file into my directory so that I can then extract it? Sorry if this is poorly worded; I am extremely new to this.
You must either run the command from the directory your file exists in, or provide a relative or absolute path to the file. Let's do the latter:
cd /home/jsmith
mkdir cw
cd cw
tar zxvf /home/jsmith/Downloads/fileNameHere.tgz
You should use the command with the options preceded by a dash, like this:
tar -zxvf filename.tar.gz
If you want to specify the directory where the files are extracted, use -C:
tar -zxf filename.tar.gz -C /root/Desktop/folder
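Putting the two together for the question above (a sketch; the Downloads path is an assumption, and CW is the directory mentioned in the question):
mkdir -p ~/CW
tar -zxvf ~/Downloads/reviews_dataset.tar.gz -C ~/CW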
Figured maybe someone here might know what's going on. Essentially, what I have to do is take a directory and make a tar file omitting a subdirectory two levels down (root/1/2). Given that it needs to work on a bunch of platforms, the easiest way I could think of was to do a find and egrep that directory out, which works well, giving me the list of files.
But then I pipe that file list into an xargs tar rvf command and the resulting file comes out at something like 33 GB. I've tried outputting the find to a file and using tar -T with that file as input; it still comes out at about 33 GB, when a straight tar of the whole directory (not omitting anything) comes in where I'd expect it, at around 6 GB.
Any thoughts on what is going on, or how to remedy this? I really need to get this figured out. I'm guessing it has to do with feeding tar a list of files vs. having it just tar a directory, but I'm not sure how to fix that.
Your find command will return directories as well as files.
Consider using find to list everything outside the unwanted subtree. Note that -name only matches the last component of a path, so a pattern containing slashes never matches anything; use -path with -prune instead. Also pass --no-recursion so that tar doesn't descend into each listed directory and archive its contents a second time (a sketch assuming GNU find and GNU tar):
find suite -path suite/tmp/Shared -prune -o -print |
    tar cvf /path/to/archive.tar --no-recursion -T -
When you specify a directory in the file list, tar packages the directory and all the files in it. If you then list the files in the directory separately, it packages the files (again). If you list the sub-directories, it packages the contents of each subdirectory again. And so on.
If you're going to do a files list, make sure it truly is a list of files and that no directories are included.
find . -type f ...
The ellipsis might be find options to eliminate the files in the sub-directory, or it might be a grep -v that eliminates them. Note that -name normally only matches the last component of the name. GNU find has ! -path '*/subdir/*' or variants that will allow you to eliminate files based on their path, rather than just their name:
find . -type f ! -path './root/1/2/*' -print
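Applied to the layout in the question, the whole job might look like this (a sketch; the archive and list locations are assumptions, placed outside the tree being archived):
find . -type f ! -path './root/1/2/*' -print > /tmp/filelist
tar -cvf /tmp/archive.tar -T /tmp/filelist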
I originally had three files: makefile, readme.txt, and hashtable.c in my directory, where I am writing my code in emacs. I noticed that some new files: #hashtable.c#, #readme.txt#, hashtable.c~, and makefile~ have been created. I was wondering what these files were. Are these important, and if not, how do I tell emacs to stop making them? I'm also curious why readme.txt doesn't get a tilde file and makefile doesn't get a sharp file.
The file with the ~ is a backup file that automatically gets created when you save a file. #readme.txt# is the autosave version of the file currently being edited. It will usually go away (unlike the ~ file) when you exit Emacs normally; if Emacs crashes or gets killed, the # files may stay around.
You might find this page about emacs backup files of interest, and this SO question: How do I control how Emacs makes backup files?
You can prevent backup files from being created with this:
(setq make-backup-files nil)
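If you also want to stop the autosave #files#, there is a separate variable for that (note this disables a crash-recovery safety net):
(setq auto-save-default nil)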
I recommend installing no-littering. It automatically puts backup files (file~) in ~/.emacs.d/var/backup/. It doesn't do anything about autosaves (#file#), but its README includes a note about putting those files in a dedicated directory:
(setq auto-save-file-name-transforms
`((".*" ,(no-littering-expand-var-file-name "auto-save/") t)))
Neither of these things actually prevents Emacs from creating these files; I'm assuming most people do want them (in case of a crash) but don't want them strewn all over the filesystem.
For #files# you have to run rm "#file#" (or escape the hashes) from the terminal, because an unquoted # starts a shell comment, so a plain rm #file# never sees the filename.
For the backup files the tilde is at the end of the name, so you can simply type rm file~; no quoting is needed.
Maybe you could try:
find . -name '#*#' | xargs rm
Warning: this will also remove matching files in subdirectories.
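To clean up both kinds of leftover files in one go, a sketch restricted to the current directory (drop -maxdepth 1 to recurse; -delete assumes GNU find):
find . -maxdepth 1 \( -name '#*#' -o -name '*~' \) -delete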
I have a single directory containing multiple zip files that contain .jpg files.
I need to unzip all files and save all contents (.jpgs files) into a single folder.
Any suggestions on a Unix command that does that?
Please note that some of the contents (jpgs) might exist with the same name in multiple zip files; I need to keep all the jpgs.
Thanks
unzip -o -B '*.zip'
Note that on some systems these utilities are not installed by default;
see http://www.cyberciti.biz/tips/how-can-i-zipping-and-unzipping-files-under-linux.html regarding installation.
Read about the -B flag to understand its limitations.
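For the question above, the whole job might look like this (a sketch; the paths are assumptions, and -B requires an UnZip build with backup support, so clashing names are kept as renamed backups rather than overwritten):
mkdir jpgs
cd jpgs
unzip -o -B '/path/to/zips/*.zip'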
I am trying to untar a UNIX-based operating system from a .tar.gz file. In order to do so I use the following command:
tar -xvf rootfs.tar.gz -o
The -o flag is there so that the ownership of the files is not preserved (preserving it gave some problems). The problem is that when a symbolic link is untarred, the following message shows up:
Cannot create symlink to `toto': Operation not permitted
Moreover, mknod also gives problems
dev/tty0: Cannot mknod: Operation not permitted
I am on a FAT filesystem. Does anyone know how to untar that file?
Thanks in advance
If the file is a tar.gz, tell tar to decompress it with -z (recent GNU tar autodetects compression when extracting, but being explicit is more portable):
tar -xvzf rootfs.tar.gz
And note that a FAT filesystem doesn't support symbolic links (or device nodes), so tar can't create them there, which explains the "Operation not permitted" errors.
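If you need those symlinks and device nodes, extract onto a Linux-native filesystem instead (a sketch; the target path is an assumption, and creating device nodes also requires root):
mkdir /mnt/ext4disk/rootfs
sudo tar -xvzf rootfs.tar.gz -C /mnt/ext4disk/rootfs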
+1 for Ivan's answer.
Please note that:
flags traditionally go right after the name of the command (some tar implementations insist on this);
you will need to study man tar to see what other options you want, e.g. to preserve owner, permissions, creation time, etc.
The correct answer is that if you're trying to untar a UNIX root file system, that's going to include special files such as device nodes (which is why tar is invoking mknod).
To create those successfully, tar must be allowed to run as root. Therefore, the correct answer is to use sudo, like so:
sudo tar -xvzf rootfs.tar.gz
Try this to untar a tar file; it solved my issue, so hopefully it will work for you without any problems:
tar -xvvf foo.tar