There are double-compressed files with the extension xxx.zip.gz.
Running gunzip creates an xxx.zip file of about 0.25 GB.
Running unzip after gunzip extracts the contents, but the xxx.zip file itself remains.
Output of unzip:
Archive: xxx.zip
inflating: xxx.txt
also
echo $? shows 0
So even though the unzip command completed successfully, the file still remains with the .zip extension. Any help?
OS - SunOS 5.10
You're finding that xxx.txt is being created, right?
unzip and gunzip have different "philosophies" about dealing with their archives: gunzip gets rid of the .gz file, while unzip leaves its zip file in place. So in your case, unzip is working as designed.
I think the best you can do is
unzip -q xxx.zip && /bin/rm xxx.zip
This will only delete the zip file if unzip exits without error. The -q option makes unzip quiet, so you won't get the status messages you included above.
edit
As you noted: when the zip file itself is over 10 GB in size, unzip does not succeed.
Assuming that you are certain there is enough disk space to hold the expanded original file, then it's hard to say. How big is the expanded file? Over 2 GB? SunOS 5, I believe, used to have a file-size limit of 2 GB, requiring "large-file" support to be added to the kernel and utilities. I don't have access to a Sun machine anymore, so I can't confirm. I think you'll find places to look with apropos largefile (assuming your $MANPATH is set up correctly).
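One quick way to check whether the target filesystem can even represent files bigger than 2 GB is the POSIX getconf pathconf query; this is just a sketch, and the path below is an example:

getconf FILESIZEBITS /path/to/extraction/dir    # 64 means files over 2 GB are representable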
But the basic test for whether the unzip worked correctly would be something like
if unzip "${file}" ; then
    echo "clean unzip for ${file}, deleting the archive file" >&2
    /bin/rm "${file}"
else
    echo "error running unzip for ${file}, archive file remains in place" >&2
fi
(Or I don't understand your use case.) Feel free to post another question showing ls -l xxx.zip.gz xxx.zip and other details to help reconstruct your expected workflow.
IHTH.
Related
I recently attempted to move some files by running an -exec mv command with find (command shown below). When I did this, I mistyped the destination directory path (so the directory did not yet exist), and mv created what appears to be an executable instead of a directory.
When I run "Get Info", one image renders and the file size is about the correct size for a single image, but hundreds of files were supposed to be moved. Have I lost this data for good? Is there any way to get macOS to recognize this "executable" as a directory?
This is the command I used:
find . -type f -name "*.JPG" -exec mv {} ../../DestinationFolderName \;
Here's an image showing a successful mv into an existing directory, and what happened when I put a path to a directory that did not yet exist.
Unfortunately "mv" to a name that doesn't exist is interpreted as a filename rather than a directory. So the OS has, one-by-one, copied your JPG file on top of each other. The resulting file is most likely whatever JPG happened to be the one it moved last (if you rename it to JPG extension you can check which one).
So, very unfortunately, you probably need to investigate a data recovery tool for macOS quickly (and before you do anything that creates more files on your disk, as much as possible). The "ghosts" of the files are, for now at least, mostly still present on your hard drive as deallocated segments, but they are back in the pool to be overwritten as you create new files (even when your browser creates temporary cache files, and things like that). It's a conundrum.
If you don't have a backup/Time Machine copy of the files, the best thing to do is get a macOS data recovery program QUICKLY.
VERY sorry not to have a happier answer.
I want to mirror HTTP and FTP directories with wget, and I want to identify incomplete downloads. Is there a way to give incomplete downloads an additional file extension like ".part" or ".incomplete"?
Sometimes I don't have a download log and I don't know the exact size of a file. (The downloads are often incomplete because of a bad FTP/HTTP server or a bad internet connection.)
For a single file download I could write a kind of wrapper:
wget -c -O file.zip.part http://domain.tld/file.zip && mv file.zip.part file.zip
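A sketch extending this to many files loops over a URL list (urls.txt is a hypothetical file with one URL per line); it still doesn't cover recursive mirroring, where wget picks the file names itself:

while IFS= read -r url; do
    name=$(basename "$url")
    wget -c -O "${name}.part" "$url" && mv "${name}.part" "$name"
done < urls.txt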
I am not sure how to do it for directories.
Kind regards
matt
When I attempt to extract a huge tar archive, I get the following error:
"filename: No such file or directory found"
Any suggestions on what could be going wrong?
This may happen if the disk is full. If you extract using:
tar -xvf <filename.tar>
you may see the following message before any "No such file or directory" errors:
mkdir failed: Disk quota exceeded
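To confirm, check free space and (if quotas are enabled) your quota on the target filesystem before extracting, for example:

df -h .     # free space on the filesystem holding the current directory (use df -k if -h is unsupported)
quota -v    # your per-user quota usage, if quotas are enabled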
Why don't you test your tar file first?
file yourfile.tar
(It should say it's a tar archive if the file isn't broken.)
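For an intact archive, the output looks something like this (the exact wording varies by tar flavor):

yourfile.tar: POSIX tar archive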
Then...
tar -tvf yourfile.tar
It should give a listing of the contents of your tar file without actually writing anything to disk, just to check its integrity.
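Putting the two steps together, a sketch (yourfile.tar is the placeholder name from above):

if tar -tvf yourfile.tar > /dev/null; then
    tar -xvf yourfile.tar
else
    echo "yourfile.tar appears damaged; not extracting" >&2
fi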
Also, if your file is larger than 2 GB, it is possible that your tar binary won't work; try gtar instead!
With that info, you can go further...
regards,
Daniel.
I used tar -cvf sample_directory/* and didn't specify file.tar.gz. So the Makefile within the folder is in some unreadable format. Is there a way to recover my Makefile?
The Makefile within the folder contains the output from the tar command (the shell glob expanded, and tar took the first match, Makefile, as its archive name), so it's not "some unreadable format", it's tar format. That tar archive won't contain your missing Makefile, though.
The comments about recovering the Makefile from your backups or from your version control system are apt. This is in fact what you need to do.
If you don't have a backup or the Makefile wasn't checked in to a version control system, then there isn't a feasible way to recover its contents.
Aside from the issue of your poor lost Makefile, a piece of advice about using tar: never tar up a bunch of individual files inside a directory; always tar up the directory itself instead. There is not much more annoying than untarring an archive that contains a big bunch of files instead of a single directory (which then contains the files). Doing that makes a mess by littering files all over whatever directory happens to be the current directory. Please be nice to whoever is going to extract your tar files (which might be yourself, later on!), follow convention, and tar up complete directories:
tar -czf file.tar.gz sample_directory
As a bonus, if you do it that way, and you forget the output filename like this:
tar -czf sample_directory
You won't squash anything, you'll just get an error.
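For example, with GNU tar the mistyped command fails outright, since no input files are left after -f consumes the directory name:

$ tar -czf sample_directory
tar: Cowardly refusing to create an empty archive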
I have a single directory containing multiple zip files that contain .jpg files.
I need to unzip all of them and save all the contents (the .jpg files) into a single folder.
Any suggestions on a Unix command that does that?
Please note that some of the contents (jpgs) might exist with the same name in multiple zip files; I need to keep all the jpgs.
thanks
unzip '*.zip' -o -B
Note that these utilities are not installed by default on some systems;
see http://www.cyberciti.biz/tips/how-can-i-zipping-and-unzipping-files-under-linux.html regarding installation.
Read about the -B flag to understand its limitations.
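If -B's numbered backup names don't suit you, one alternative sketch is to extract each archive into its own subfolder, so duplicate names never collide (photos is a made-up target directory):

mkdir -p photos
for z in *.zip; do
    dir="photos/${z%.zip}"   # one subfolder per archive avoids name clashes
    mkdir -p "$dir"
    unzip -q "$z" -d "$dir"
done

This keeps everything under one parent folder rather than in one flat directory, which sidesteps the collisions entirely.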