Uncompress a file in a Rackspace Cloud Files container for CDN

I created a Rackspace account earlier today so I can serve my OpenCart images from the Rackspace CDN.
I have created a container into which I will upload over 500,000 images, but I would prefer to upload them as a single compressed file, which feels more flexible.
If I upload all the images in one compressed file, how do I extract it once it is in the container? And which compression formats would work?

The answer may depend on how you are attempting to upload your file/files. Since this was not specified, I will answer your question using the CLI from a *nix environment.
Answer to your question (using curl)
Using curl, you can upload a compressed file and have it extracted using the extract-archive feature.
$ tar cf archive.tar directory_to_be_archived
$ curl -i -X PUT -H 'X-Auth-Token: AUTH_TOKEN' \
    'https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa?extract-archive=tar' \
    -T ./archive.tar
You can find the documentation for this feature here: http://docs.rackspace.com/files/api/v1/cf-devguide/content/Extract_Archive-d1e2338.html
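If a plain tarball is too large to be practical, a gzipped tarball should also work, since the extract-archive feature accepts tar, tar.gz and tar.bz2. A minimal sketch, reusing the placeholder token and storage URL from above (container_name is also a placeholder; paths inside the archive become object names within that container):
$ tar czf images.tar.gz directory_to_be_archived
$ curl -i -X PUT -H 'X-Auth-Token: AUTH_TOKEN' \
    'https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa/container_name?extract-archive=tar.gz' \
    -T ./images.tar.gz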
Recommended solution (using Swiftly)
Uploading and extracting that many objects using the above method might take a long time to complete. Additionally, if there is a network interruption during that time, you will have to start over from the beginning.
I would recommend instead using a tool like Swiftly, which lets you upload your files concurrently. That way, if there is a problem during the upload, you don't have to re-upload objects that have already been uploaded successfully.
An example of how to do this is as follows:
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put container_name -i images/
If there is a network interruption while uploading, or you have to stop and restart the upload, you can add the --different option after put in the above command (see the example below). This tells Swiftly to HEAD each object first and only upload when the time or size of the local file does not match its corresponding object, skipping objects that have already been uploaded.
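For example, a resumed run would look like this (same placeholder credentials and container as above). If your Swiftly version rejects --different in that position, check swiftly help put for the exact spelling:
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put --different container_name -i images/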
Swiftly can be found on github here: https://github.com/gholt/swiftly
There are other clients that can likely do the same thing, but I know Swiftly works, so I recommend it.

Related

How to download multiple files from the Firebase Storage Console?

I have an images directory in Firebase Storage and I am trying to download all the files in that directory from the console. It gives me the option to select all files, and a download button appears, but when I click it only one image is downloaded.
Is there a way to download all the images via the Firebase Console?
You can use the gsutil tool to download all files from your Firebase Storage bucket. Use the cp command:
gsutil -m cp -r gs://{{bucket_url}} {{local_path_to_save_downloads}}
-m performs a multi-threaded/multi-process copy; use it when downloading a large number of files so they are transferred in parallel
-r copies an entire directory tree recursively
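For example, assuming the default Firebase bucket name (your-project-id.appspot.com is a placeholder) and that your files live in an images directory:
gsutil -m cp -r gs://your-project-id.appspot.com/images ./firebase-images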
Just had the same issue. Turns out it's a known issue.
Here's the response from Firebase support:
This is a known bug when trying to download directly on the Storage Console. Our engineers are working on getting it fixed, however I can't provide any timelines right now. We apologize for the inconvenience this may have caused you. For the time being, you may download the file by right-clicking the image previewed, then choose "Save image as...".

Embedded jpg from remote raw file, only?

Let's say I have a raw image on a remote server, e.g. www.mydomain.com/DSC0001.ARW, and I would like to extract only the "small" preview image (JPEG) from that raw file without having to download the whole raw file. Is that possible somehow?
Let me preface this with: I am no image processing expert.
Answer to your question
You can display images resized in the browser, but that still downloads the entire file. To my mind, that means the best approach would be to save a pre-processed thumbnail alongside each raw image. If you use a naming convention like DSC0001.ARW.thumbnail.png, they should be easy to find.
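A rough sketch of pre-generating such thumbnails, assuming the files are Sony .ARW raws as in the question and that exiftool and ImageMagick are installed (the naming simply follows the convention above):
for f in *.ARW; do
  # Pull the embedded JPEG preview out of the raw file and save it as a PNG
  # thumbnail next to the original, e.g. DSC0001.ARW.thumbnail.png
  exiftool -b -PreviewImage "$f" | convert - "${f}.thumbnail.png"
done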
Possible alternative solution on the AWS stack
This is probably only a realistic solution if you are willing to get involved with some code and AWS. If you store your images in AWS S3, you could have each upload of a new raw file fire an event to AWS Lambda, which runs a script that processes the raw file into a thumbnail and saves it back to S3 for you.
Updated Again
Ok, it appears that the server on which you have the raw files is not actually yours, so you cannot extract the preview image on the server as I suggested... however, you can download just part of the 25 MB image and extract the preview locally. So, here I download just the first 1 MB of a file on a server I don't own and then extract the preview on my local machine:
curl -r 0-1000000 http://www.rawsamples.ch/raws/sony/RAW_SONY_ILCA-77M2.ARW > partial.arw
(curl's progress output shows roughly 976 kB transferred)
exiftool -b -PreviewImage -W! preview.jpg partial.arw
You may need to experiment with how much of the file you need to download, depending on the camera and its settings etc.
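If you want to automate that experiment, something along these lines should work (same sample URL as above; note that exiftool may still emit a truncated preview without complaining, so check the resulting JPEG opens cleanly):
for kb in 250 500 1000 2000; do
  curl -s -r 0-$((kb * 1024)) http://www.rawsamples.ch/raws/sony/RAW_SONY_ILCA-77M2.ARW > partial.arw
  # -b writes the embedded preview (if any) to stdout as binary
  if exiftool -b -PreviewImage partial.arw > preview.jpg 2>/dev/null && [ -s preview.jpg ]; then
    echo "Got a preview from the first ${kb} kB - check it opens cleanly"
    break
  fi
done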
Updated Answer
Actually, it is easier to use exiftool to extract the preview on the server as (being just a Perl script) it is miles simpler to install than ImageMagick.
Here is how to extract the Preview from a Sony ARW file at the command line:
exiftool -b -PreviewImage -W! preview.jpg sample.arw
That will extract the Preview from sample.arw into a file called preview.jpg.
So, you would put a PHP script on your server (naming it preview.php) that looks like this:
#!/usr/bin/php -f
<?php
// Only allow a plain filename, so the parameter cannot escape the directory
// or inject shell commands.
$image = basename($_GET['image']);
header("Content-type: image/jpeg");
// Extract the embedded preview into preview.jpg, then send it back.
exec("exiftool -b -PreviewImage -W! preview.jpg " . escapeshellarg($image));
readfile("preview.jpg");
?>
and it will extract the Preview from the parameter named image and send it back to you as a JPEG.
Depending on your server setup and file naming, the invocation will be something like:
http://yourserver/preview.php?image=sample.arw
Note that you will need to do a little more work if the server has multiple users because, as it stands, I have fixed the name of the preview file as preview.jpg, which means two simultaneous users could potentially clash.
Original Answer
That's quite easy if you can run ImageMagick on your server: you could run a little PHP script that takes the image name as a parameter, extracts the thumbnail, and sends you back a JPEG.
I presume you mean a Sony Alpha raw image.

gsutil zip directory on google cloud storage

Is it possible to compress a file or directory on Google Cloud Storage without downloading it first and re-uploading it?
I think I need a tool similar to http://googlegenomics.readthedocs.org/en/latest/use_cases/compress_or_decompress_many_files/
Thank you.
No. There is no way to ask GCS to directly compress or decompress objects entirely within GCS. You can certainly copy them elsewhere in the cloud (GCE, for instance) and operate on them there, or you could download an uncompressed object as a compressed object simply by using the Accept-Encoding: gzip header, but, again, not without taking it out of GCS in some fashion.
From https://console.cloud.google.com/storage/browser/your-bucket, open a Cloud Shell (top right of the page), then:
gsutil cp -r gs://your-bucket .
zip your-bucket.zip ./your-bucket
gsutil cp your-bucket.zip gs://your-bucket
Strictly speaking, this does not happen in place within your storage, but it does all stay within the cloud.
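If the bucket is too large for Cloud Shell's local disk, you can also compress individual objects by streaming them through the shell without keeping a local copy. A sketch with placeholder object names (the data still leaves GCS and comes back; it is just never written to local disk):
gsutil cat gs://your-bucket/big-file.csv | gzip | gsutil cp - gs://your-bucket/big-file.csv.gz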

How to download artifacts from Artifactory Server

We have a bunch of JARs on the Artifactory server (the free version).
How can we download all the JARs from it in a single HTTP request?
Do we need to tar all the JARs into a single tar file in order to download them efficiently?
Since you are the one who generates the files, you have two options:
As you said, generate the tar before uploading it (see the sketch after this list). You'll still be able to search and browse the files inside it.
Write an afterDownloadError user plugin. Each time a user tries to access a URL ending in .tar, create the tar from the needed files and serve it.
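A sketch of the first option, with a hypothetical repository URL, path, and credentials (Artifactory accepts plain HTTP PUT deployments):
tar cf all-jars.tar *.jar
curl -u username:password -T all-jars.tar \
    "http://artifactory.example.com:8081/artifactory/libs-release-local/bundles/all-jars.tar"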

Determine file compression type

I backed up a large number of files to S3 from a PC before switching to a Mac several months ago. Several months later, I'm now trying to open the files and have realized they were all compressed by the S3 GUI tool I used, so I cannot open them.
I can't remember what program I used to upload the files, and the standard decompression commands are not working from the command line, e.g.:
unzip
bunzip2
tar -zxvf
How can I determine what the compression type is of the file? Alternatively, what other decompression techniques can I try?
PS - I know the files are not corrupted because I tested downloading and opening them back when I originally uploaded to S3.
You can use Universal Extractor (open source) to determine compression types.
Here is a link: http://legroom.net/software/uniextract/
The small downside is that it looks at the file extension first, but I have managed to change the extension myself on an unknown file and it works almost always, e.g. trying .rar or .exe etc.
EDIT:
I found a huge list of archive programs; maybe one of them will work? It's ridiculously big:
http://www.maximumcompression.com/data/summary_mf.php
http://www.maximumcompression.com/index.html
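Before reaching for any of those, it may also be worth running the standard file utility, which identifies formats by their magic bytes rather than by extension (mystery-backup is a placeholder filename):
file mystery-backup
# prints something like "gzip compressed data ..." or "Zip archive data ..."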
