I am learning about .tar.gz files while reading about the gzip utility. I read here that the tar utility is used only to create an archive, not to compress it.
So why do we even need archives when they are the same as directories? (A collection of files and folders.)
So that you can put a directory structure into a single file, which makes it possible to send it to someone or to save it. Then they or you can reconstruct the directory structure from that file.
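For example, with placeholder names:
tar -czf project.tar.gz project/
tar -xzf project.tar.gz
The -c flag creates the archive, -x extracts it, -f names the archive file, and -z adds the gzip compression, which is where the .gz in .tar.gz comes from; tar itself only does the bundling.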
I have an SFTP folder where 200 files will be dropped on a daily basis.
I need to archive these files and move them to another SFTP folder in one shot instead of moving one file at a time.
Basically, I want to archive/move the files into a separate folder without using a foreach approach that archives and writes file by file.
Is there an approach I can follow to achieve that?
Thanks
The SFTP Listener source has a Move to Directory configuration that sets a directory to which the received files are moved.
Wondering if there is a way to download the root folder plus a bunch of subfolders (and subfolders of those folders) with all the files, and keep them in their respective folders.
I've tried some Firefox plugins like FlashGot and DownThemAll, but they grab the actual web files in addition to the files in the repository, and only if those files are visible. For example, if I don't expand all the folders and expose the files in the repository, the plugins won't detect them.
I would just expand all the folders and expose the files, but these plugins won't recognize the folders: they just download as "foldername".html, and all the files end up mixed together in one folder.
I've also tried VisualWget and allowed recursive downloads, but again, this only grabs the actual website files, not the files in the repository.
If anyone could help it'd be greatly appreciated. I've been copying them manually but there are literally thousands of files and folders so I'm looking for a quicker solution.
As a client you can only download what's accessible. You either need to know the list of files or crawl the pages for the links, which is what the Firefox plugins do.
There's no way to get a list of files on the server without access to the server beyond HTTP (unless the server has WebDAV or exposes some other API).
I ended up getting it to work. I used the following command in Terminal.
scp -r username@hostaddress:/file/path/to/directory /path/to/my/computer/directory
-r is for recursive, so it downloads all files, directories, and subdirectories.
If you try this, be sure to run the command from your local terminal. I made the mistake of running it from the SSH connection to the server (no negative effects, just frustrating).
Greetings! When we set up the ASP.NET file-uploading control called "NeatUpload", it saves its files to a temporary location: either "YOUR_APP_ROOT/app_data/NeatUpload_Temp/", if that directory is writable, or the system's temp folder. However, the demo does not seem to actually upload any files, nor does it include an example of saving the files to a particular directory.
How do we save the file we have uploaded and move the uploaded file to a particular folder? My only clue from the documentation is that it has to do with UploadStorageProvider, but I need some help to implement this.
If you read the documentation, section 3.3, point 6:
In your codebehind file, process the uploaded file. If you are using the InputFile control, the uploaded file's client-specified name, MIME type, and contents can be accessed via inputFileId.FileName, inputFileId.ContentType, and inputFileId.FileContent, respectively. If you want to keep the uploaded file, you must use the inputFileId.MoveTo() method to move the uploaded file to a permanent location. If you do not, NeatUpload will automatically remove the uploaded file at the end of the request to ensure that unwanted files do not fill up the filesystem. The following code will put the uploaded file in the application's root directory (assuming sufficient permissions):
and so on. I hope this is what you are after.
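For what it's worth, a rough code-behind sketch of the step that excerpt describes might look like the following; the control ID inputFile, the button handler name, and the MoveToOptions.Overwrite argument are assumptions on my part rather than something taken from the documentation:
using System;
using System.IO;
using Brettle.Web.NeatUpload;

public class UploadPage : System.Web.UI.Page
{
    // Matches an <Upload:InputFile id="inputFile" runat="server" /> control on the .aspx page (ID assumed).
    protected InputFile inputFile;

    protected void submitButton_Click(object sender, EventArgs e)
    {
        // FileName is empty when no file was uploaded.
        if (!string.IsNullOrEmpty(inputFile.FileName))
        {
            // Move the upload out of NeatUpload's temp area before the request ends;
            // otherwise NeatUpload removes it automatically.
            string target = Path.Combine(Request.PhysicalApplicationPath,
                                         Path.GetFileName(inputFile.FileName));
            inputFile.MoveTo(target, MoveToOptions.Overwrite);
        }
    }
}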
I may not bother with this, but if it's very simple I may consider it. The site I am working on is, by design, meant to hold hundreds of thousands of files. I don't know if we'll have only one download option or multiple. Right now the choices are A) just the file, or B) an archive that has the file plus the license and conditions.
I am trying to figure out whether it can be efficient to offer both by using something like a file open/read and wrapping archive headers around it that contain the license and the other zip contents. My biggest worries are that doing the file open/read myself will not be as efficient as letting the server transmit the file, and that it may be hard to generate and change the contents of the zip dynamically (if a user wishes to change the license, or if we want to add other data such as an author description, author URL and a permalink on the site).
Is this efficient, and how would I create the archive dynamically from only the original file and data pulled from the database?
PS: I am using Debian/Apache/ASP.NET with xsp.net and Mono.
SharpZipLib is a very nice stream-based library that you can use to create archive files.
You can use zip libraries (System.IO.Packaging.ZipPackage, DotNetZip, SharpZipLib) or even command-line programs (say, 7-Zip) for compressing the file. A library should offer better performance.
However, the important thing will be to add a caching layer, i.e. generated zip files should be cached on the file system so that they can be served directly if a request comes in for the same file.
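If you go the SharpZipLib route, a minimal sketch could look like the following; the class and method names, the buffer size, and the LICENSE.txt entry name are placeholders, and the license text is assumed to come from your database:
using System.IO;
using System.Text;
using ICSharpCode.SharpZipLib.Zip;

static class DownloadZipper
{
    // Streams the original file plus a license entry into a zip written to outputStream
    // (for example Response.OutputStream), so nothing has to be staged on disk first.
    public static void WriteDownloadZip(Stream outputStream, string sourceFilePath, string licenseText)
    {
        using (var zip = new ZipOutputStream(outputStream))
        {
            zip.IsStreamOwner = false;  // leave the underlying response stream for ASP.NET to close
            zip.SetLevel(6);            // 0 = store only, 9 = best compression

            // Entry 1: the file itself, under its original name.
            zip.PutNextEntry(new ZipEntry(Path.GetFileName(sourceFilePath)));
            using (var source = File.OpenRead(sourceFilePath))
            {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    zip.Write(buffer, 0, read);
                }
            }
            zip.CloseEntry();

            // Entry 2: the license/conditions text pulled from the database.
            zip.PutNextEntry(new ZipEntry("LICENSE.txt"));
            byte[] licenseBytes = Encoding.UTF8.GetBytes(licenseText);
            zip.Write(licenseBytes, 0, licenseBytes.Length);
            zip.CloseEntry();

            zip.Finish();
        }
    }
}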
I downloaded the http://ftp.drupal.org/files/projects/nl-6.x-1.5.tar.gz file from the Drupal translations page. The readme file says to "Copy (merge) the content of this translation package into your Drupal installation root directory".
If I look at the package, it has a few text files and modules, profiles and themes folders. If I copy those into the root (so MAMP/sitename/), it overwrites a bunch of files (there are already modules and themes folders there...) and whatever page I load on the site gives fatal errors.
Is it possible that on MAMP / Mac, Unarchiver doesn't do a copy/merge but actually replaces the old modules folder with this new one?
What should happen is that the relevant .po files are placed in the folders where they belong. If things go wrong, what you'll end up with is a folder with all the .po files in a nested set of folders.
In theory I guess it's possible to overwrite folders etc., but you would be prompted to allow that first.
So you shouldn't be afraid that your entire Drupal install will be overwritten. You can just try the unpack and see what happens. Worst case, you'll need to place the files in the correct folders yourself.
I can't speak to Unarchiver, but if you're overwriting files then something's not behaving correctly. All the translation packs really do is add some additional files and folders (e.g. /modules/user/translations/modules-user.nl.po to /modules/user).
My guess is that your whole /modules/user directory (along with all the others) is being replaced, rather than added to.
Have you tried merging the folders in Terminal? You should be able to merge these folders directly from tar. Comment back if you would like more detailed instructions on how to do that.
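For example, running something like this from Terminal (the path is a placeholder for your own setup) extracts the package on top of the existing site, and tar adds files into the existing modules, profiles and themes folders rather than replacing those folders wholesale:
tar -xzf nl-6.x-1.5.tar.gz -C /path/to/MAMP/sitename
The -C option tells tar to change into the Drupal root directory before extracting.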