Let's say I have a raw image on a remote server, e.g. www.mydomain.com/DSC0001.ARW, and I would like to extract only the "small" preview image (JPEG) from that raw file without having to download the whole raw file. Is that possible somehow?
Let me preface this with: I am no image processing expert.
Answer to your question
You can show images resized, but this will still download the entire file. To my mind, that means the best approach would be to save a pre-processed thumbnail of that image alongside the raw image. If you use some naming convention like DSC0001.ARW.thumbnail.png, they should be easy to find.
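The pre-processing step could look roughly like this (a minimal sketch, assuming ImageMagick with a raw delegate such as dcraw or ufraw is installed; the 256x256 size is just an example):
# Generate a small PNG thumbnail next to the raw file, using the naming convention above
convert DSC0001.ARW -resize 256x256 DSC0001.ARW.thumbnail.png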
Possible alternative solution on the AWS stack
This is probably only a realistic solution if you are willing to get involved with some code and AWS. If you store your images in AWS S3, you could have S3 fire an event to AWS Lambda whenever you upload a new raw file; the Lambda function would then process the raw file into a thumbnail and save it back to S3 for you.
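Wiring that up might look something like this (a hedged sketch using the AWS CLI; the bucket name, region, account ID and function name are placeholders, and the Lambda function must already exist with permission for S3 to invoke it):
# Tell S3 to invoke a Lambda function whenever a .ARW object is created
aws s3api put-bucket-notification-configuration \
  --bucket raw-images \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".ARW"}]}}
    }]
  }'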
Updated Again
OK, it appears that the server hosting the raw files is not actually yours, so you cannot extract the preview image on the server as I suggested... however, you can download just part of the 25MB file and then extract the preview locally. Here I download just the first 1MB of a file from a server I don't own onto my local machine and extract the preview locally:
curl -r 0-1000000 http://www.rawsamples.ch/raws/sony/RAW_SONY_ILCA-77M2.ARW > partial.arw
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  976k  100  976k    0     0   984k      0 --:--:-- --:--:-- --:--:--  983k
exiftool -b -PreviewImage -W! preview.jpg partial.arw
You may need to experiment with how much of the file you need to download, depending on the camera and its settings etc.
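If you want to automate that experimentation, a sketch like the following might help (same sample URL as above; the byte sizes are arbitrary guesses, and a cut-off download can still produce a truncated preview, so sanity-check the resulting JPEG):
# Download progressively larger slices until exiftool can pull out a preview
for bytes in 250000 500000 1000000 2000000; do
  rm -f preview.jpg
  curl -s -r 0-$bytes http://www.rawsamples.ch/raws/sony/RAW_SONY_ILCA-77M2.ARW > partial.arw
  exiftool -b -PreviewImage -W! preview.jpg partial.arw
  if [ -s preview.jpg ]; then
    echo "Preview extracted from the first $bytes bytes"
    break
  fi
done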
Updated Answer
Actually, it is easier to use exiftool to extract the preview on the server as (being just a Perl script) it is miles simpler to install than ImageMagick.
Here is how to extract the Preview from a Sony ARW file at the command line:
exiftool -b -PreviewImage -W! preview.jpg sample.arw
That will extract the Preview from sample.arw into a file called preview.jpg.
So, you would put a PHP script on your server (naming it preview.php) that looks like this:
#!/usr/bin/php -f
<?php
// Extract the embedded preview from the raw file named in the "image" query parameter
// escapeshellarg() prevents shell injection via the query string
$image = escapeshellarg($_GET['image']);
header("Content-type: image/jpeg");
exec("exiftool -b -PreviewImage -W! preview.jpg $image");
readfile("preview.jpg");
?>
and it will extract the Preview from the file named in the image parameter and send it back to you as a JPEG.
Depending on your server setup and file naming, the invocation will be something like:
http://yourserver/preview.php?image=sample.arw
Note that you will need to do a little more work if the server is multi-user, because as it stands the preview file name is fixed as preview.jpg, which means two simultaneous requests could potentially clash.
Original Answer
That's quite easy if you can run ImageMagick on your server: a little PHP script could take the image name as a parameter, extract the thumbnail and send you a JPEG.
I presume you mean a Sony Alpha raw image.
Related
I want to provide support to convert single-page and multi-page tiff files into PDFs. There is an executable in Bit Miracle's LibTiff.NET called Tiff2Pdf.
How do I use Tiff2Pdf in my application to convert tiff data stream (not a file) into a pdf data stream (not a file)?
I do not know if there is an API exposed because the documentation only lists Tiff2Pdf as a tool. I also do not see any examples in the examples folder using it in a programmatic way to determine if it can handle data streams or how to use it in my own program.
The libtiff tools expect a filename, so the runs shown below simply take X.tif and write it to various destinations; the first run uses the default output.
tiff2pdf x.tif
and we can see that tiff2pdf writes the PDF as a stream to the console (standard output), but without a destination to write to that output is effectively lost. On a second run we can redirect it:
tiff2pdf x.tif > a.pdf
or alternatively specify a destination:
tiff2pdf -o b.pdf x.tif
So in order to use these tools we need a file system to receive the file objects. The destination folder/directory can be a memory file system (RAM drive) or an ordinary folder, but you need to set that up first.
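In other words, the stream-to-stream conversion the question asks about ends up being a temp-file round trip. A rough shell sketch of that flow (the temp directory could just as well live on a RAM disk):
# Write the incoming TIFF stream to disk, convert it, then emit the PDF as a stream
tmpdir=$(mktemp -d)
cat > "$tmpdir/in.tif"                  # TIFF data stream arrives on stdin
tiff2pdf -o "$tmpdir/out.pdf" "$tmpdir/in.tif"
cat "$tmpdir/out.pdf"                   # PDF data stream goes to stdout
rm -rf "$tmpdir"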
NuGet is a package manager that simply bundles the lib, and as I don't use .NET you're a bit out on a limb, since Bit Miracle do not offer free support (hence pointing you at Stack Overflow, a very common tech-support ploy: Pass Liability Over Yonder). However, looking at https://github.com/BitMiracle/libtiff.net/tree/master/Samples
some sample names suggest in-memory processing, such as https://github.com/BitMiracle/libtiff.net/tree/master/Samples/ConvertToSingleStripInMemory, so perhaps you can get more ideas there.
Although I am able to upload a 5MB json file, I can't upload a 50MB json file using the command-line tool for uploading to firebase: firebase-import.
When I run the upload on the 50MB json file, it prints:
Reading ... [path to json]
Preparing JSON for import... (may take a minute)
Killed
It does not provide me with any more information. I tested this multiple times on a 5 MB file and had no issues.
The CLI's documentation states that this tool has been tested with files up to 400MB, so I do not think this is a size issue. However, like I said, the only difference between the file that fails to upload and the file that uploads is its size.
Has anyone seen anything like this? Does anyone have any suggestions for diagnosing this? Thank you.
I have searched the web and SO for any similar questions but found none.
firebase-import --database_url [my url] --path [my path] --json [smaller file works here but larger doesn't] --service_account [path]
Expected: An upload progress bar followed by my data being visible on the firebase GUI.
Actual Result: A simple "Killed" with no information as to why.
As it turns out, this is a memory limit issue. Since I am using an EC2 instance with little memory, preparing the JSON took up more memory than was available. Uploading the 50MB file via the GUI works fine.
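If someone else hits the same "Killed" message, a hedged way to confirm and work around it on a small instance might look like this (the 2G swap size and the bracketed arguments are placeholders):
# Confirm the kernel's OOM killer terminated the import
dmesg | grep -iE 'out of memory|killed process'
# Add temporary swap so the JSON preparation step has room to work
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
firebase-import --database_url [my url] --path [my path] --json [large file] --service_account [path]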
I created a Rackspace account earlier today to serve my OpenCart images from the Rackspace CDN.
I have created a container where I will upload over 500,000 images, but I would prefer to upload them as a compressed file, which feels more flexible.
If I upload all the images as one compressed file, how do I extract it once it is in the container, and what compression types would work?
The answer may depend on how you are attempting to upload your file/files. Since this was not specified, I will answer your question using the CLI from a *nix environment.
Answer to your question (using curl)
Using curl, you can upload a compressed file and have it extracted using the extract-archive feature.
$ tar cf archive.tar directory_to_be_archived
$ curl -i -XPUT -H'x-auth-token: AUTH_TOKEN' https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa?extract-archive=tar -T ./archive.tar
You can find the documentation for this feature here: http://docs.rackspace.com/files/api/v1/cf-devguide/content/Extract_Archive-d1e2338.html
Recommended solution (using Swiftly)
Uploading and extracting that many objects using the above method might take a long time to complete. Additionally if there is a network interruption during that time, you will have to start over from the beginning.
I would recommend instead using a tool like Swiftly, which will allow you to upload your files concurrently. This way, if there is a problem during the upload, you don't have to re-upload objects that have already been successfully uploaded.
An example of how to do this is as follows:
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put container_name -i images/
If there is a network interruption while uploading, or you have to stop/restart uploading your files, you can add the "--different" option after the 'put' in the above command. This will tell Swiftly to HEAD the object first and only upload if the time or size of the local file does not match its corresponding object, skipping objects that have already been uploaded.
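For instance, resuming the same upload might look like this (same placeholders as above):
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
    --auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
    --concurrency=10 put --different container_name -i images/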
Swiftly can be found on github here: https://github.com/gholt/swiftly
There are other clients that possibly do the same things, but I know Swiftly works, so I recommend it.
My server has no space left on disk. Yesterday I deleted 200 GB of data; today it is full again. Some process must be writing files. How do I find out where the new huge files are being stored?
Use df to check partition usage.
Use du to find sizes of folders.
I tend to do this:
du -sm /mount/point/* | sort -n
This gives you a list with the size of folders in MB in the /mount/point folder.
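Since the question is specifically about files that appeared recently, something like the following might also help narrow it down (a sketch; the size and age thresholds are arbitrary):
# List files over 100 MB modified within the last day on this filesystem
find / -xdev -type f -size +100M -mtime -1 -exec ls -lh {} \; 2>/dev/null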
Also, if you have X you can use baobab or similar utilities to explore disk usage.
PS: check the log files. For example, if you have Tomcat installed it tends to generate a crazy amount of logs if not configured properly.
Using Classic ASP (stop tutting), I need to build an application that transfers high resolution photos from one server to another, around 360,000 including the thumbnails to be exact. The application will be called via a Windows schedule and will run as a background process.
What is the best way to achieve this, keeping performance in mind? The last time I built a monster script like this, it was transferring and converting database tables with over one million rows; the application started really fast, but after 25,000 records it went really, really slow! I want to avoid that this time.
Obviously it will be a cross-domain transfer, so I was thinking of using an ASP/FTP component to grab a file, send it, and record its success in a DB table, one file at a time, so the application knows what it has done so far.
Is it best to process one file at a time and refresh, so it doesn't abuse the server's resources, or should I process 1000 at a time, or more? I want it to be as quick as possible but without clogging up the server.
Any help/suggestions would be gratefully received.
I think it is best to do one file at a time, because if the connection goes down for a brief period you don't lose the progress on files you have already sent.
Even though you are using Classic ASP, you can take advantage of .NET for uploading the files, using the FTP client classes in .NET, and avoid purchasing/installing a third-party component. .NET is surely already installed on the server.
My process will look like this:
Upload 1 file using FTP (better performance)
If successful call an ASP page that records the action in the remote DB
Wait a second and retry up to 3 times if there is an error uploading
Proceed to next file
If the process is clogging the server, you can put a brief pause between each upload, as in the sketch below.
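Purely to illustrate that flow (not Classic ASP, just a shell sketch of the same loop; the FTP host, credentials and logging URL are placeholders):
# For each local file: upload via FTP, retry up to 3 times, then record success remotely
for f in /photos/*.jpg; do
  for attempt in 1 2 3; do
    if curl -s -T "$f" "ftp://user:pass@ftp.example.com/photos/"; then
      curl -s "http://remote.example.com/record.asp?file=$(basename "$f")" > /dev/null
      break
    fi
    sleep 1                      # brief pause before retrying
  done
done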
I have something like that running in Classic ASP; it handles tens of thousands of images without problems.
On the server that houses the images I run a (VBS) script that, for each image:
Makes a text-file with metadata
Makes a thumbnail and a mid-sized image copy on the second (web)server
The script runs continuously, and for each folder and file it only checks whether the files are already present on the webserver, creating them if not. No DB is needed.
Between every check it sleeps a second; that way the load on the server is only about 2%. I use iPhoto in command-line mode to extract the metadata and images, but you could use a library for that.
So these three files are stored on the webserver in a copy of the folder structure from the first server, but without the full-sized images.
On the webserver you only need to be able to browse the thumbnails and visualize the metadata and mid-size images.
If the user needs the full-size image, they click the mid-sized one, whose URL points at the file on the first server.
Upload all the files via FTP
Create a CSV file with all your data
Pull it into the DB in one go
The amount of network handshaking over 360,000 individual transactions would be the bottleneck.