Is it possible to download images to users' local machines directly via Firebase Functions? How would I do it in these cases:
Images are stored in Firebase Storage.
Images are stored on other cloud storage providers (I can access them with a URL).
I don't want to download those images via URL links, so that I don't reveal the URL the image is located at.
Is it possible to download images to users' local machines directly via Firebase Functions?
No, it's not possible. The client must reach out to the server in order to download content. The content can't be "pushed" to the client without its authorization. That would be a huge security hole for the client.
This is why download URLs exist - to give the client something to download via a normal HTTP request.
You can create a presigned URL using the Google APIs library. The Firebase bucket is just a regular GCS bucket. Something like this:
const admin = getFirebaseAdmin();
const bucket = admin.storage().bucket(firebaseConfig.storageBucket);
const f = bucket.file(location);

// file.exists() resolves to an array like [true], so destructure it
const [exists] = await f.exists();
if (!exists) {
  throw logError(`No file found at specified location: ${location}`, functionName);
}

const [url] = await f.getSignedUrl({
  action: 'read',
  expires: new Date(Date.now() + 24 * 60 * 60 * 1000), // expires in 24 hours
});
return url;
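On the client, the returned URL can then be used to trigger a normal browser download, for example with an anchor element (the same pattern appears again further down this page). A minimal sketch, assuming the function above is exposed over HTTPS at a hypothetical getImageUrl endpoint that responds with JSON:
// Inside an async function: ask the HTTPS function for a signed URL, then let the browser download it.
const res = await fetch('https://<region>-<project>.cloudfunctions.net/getImageUrl?location=images/photo.jpg'); // hypothetical endpoint
const { url } = await res.json();

const a = document.createElement('a');
a.href = url;
a.download = 'photo.jpg'; // suggested filename; browsers may ignore this for cross-origin URLs
a.click();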
So I want to have a Nuxt site hosted on Netlify where there's a child route whose slug is a Firebase Firestore document ID.
Example:
https://www.example.com/users/steve
(where "steve" is the documentid)
So when the route is hit I would need to query Firebase to see if it exists, and if not I would have to return a 404. Is this even possible? I can do it easily in .NET or PHP, but I'm very unsure about a SPA.
Specifically, what should I be looking for in the docs if I can do this?
One solution is to implement an HTTPS Cloud Function that you would call like a REST API, sending an HTTP GET request to the function's endpoint.
As explained in the doc "Used as arguments for onRequest(), the Request object gives you access to the properties of the HTTP request sent by the client".
So your Cloud Function would look like:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.getUser = functions.https.onRequest((req, res) => {
  // Get the value of the user by parsing the URL path,
  // e.g. "/users/steve" -> "steve"
  const user = req.path.split('/').pop();

  // Query the Firestore database
  admin.firestore().collection('users').doc(user).get()
    .then(doc => {
      if (doc.exists) {
        res.status(200).end();
      } else {
        res.status(404).end();
      }
    })
    .catch(err => res.status(500).send(err.message));
});
See the get started page and the video series for more info on Cloud Functions.
Note that you can connect an HTTP function to Firebase Hosting, in such a way that "requests on your Firebase Hosting site can be proxied to specific HTTP functions".
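For example, a firebase.json rewrite that proxies the /users routes to the function above could look like this (a sketch; the "public" directory and source pattern are assumptions to adjust for your project):
{
  "hosting": {
    "public": "dist",
    "rewrites": [
      { "source": "/users/**", "function": "getUser" }
    ]
  }
}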
I have a Firebase function that uploads files to Firebase Storage. After the upload I have to return the URL (as the REST response) so that the user can view the file.
const bucket = admin.storage().bucket();
const [file, meta] = await bucket.upload(tempLocalFile, {
  destination: uploadPath,
  resumable: false,
  public: true,
});
I have two options:
1- const [signedUrl] = await file.getSignedUrl({ action: 'read', expires: '03-09-2491' });
2- meta.mediaLink
SignedUrl will be like https://storage.googleapis.com/web-scanner-dev.appspot.com/pwc%2Fwww.x.com%2F2019-11-17%2Fdesktop%2Fscreenshot-2019-11-17-1125.png?GoogleAccessId=firebase-gcloud%40scanner-dev.iam.gserviceaccount.com&Expires=16447035600&Signature=w49DJpGU9%2BnT7nlpCiJRgfAc98x4i2I%2FiP5UjQipZQGweXmTCl9n%2FnGWmPivkYHJNvkC7Ilgxfxc558%2F%2BuWWJ2pflsDY9HJ%2Bnm6TbwCrsmoVH56nuGZHJ7ggp9c3jSiGmQj3lOvxXfwMHXcWBtvcBaVj%2BH2H8uhxOtJoJOXj%2BOq3EC7XH8hamLY8dUbUkTRtaWPB9mlLUZ78soZ1mwI%2FY8DqLFwb75iob4zwwnDZe16yNnr4nApMDS7BYPxh4cAPSiokq30hPR8RUSNTn2GxpRom5ZiiI8dV4w%2BxYZ0DvdJxn%2FW83kqnjx6RSdZ%2B9S3P9yuND3qieAQ%3D%3D
and mediaLink will be like https://storage.googleapis.com/download/storage/v1/b/web-scanner-dev.appspot.com/o/pwc%2Fwww.x.com%2F2019-11-17%2Fdesktop%2Fscreenshot-2019-11-17-1125.png?generation=1574007908157173&alt=media
What are the pros and cons of each?
The mediaLink does not convey any access permissions on its own -- thus, the object itself will need to be publicly readable in order for end users to make use of the link (or you will need to be authenticated as an account with read access to that bucket when you execute the link).
On the other hand, a URL returned by getSignedUrl will have a signature that allows access for as long as the URL hasn't expired. Thus, the link alone is sufficient (if temporary) permission to access the blob. Additionally, the URL that is generated maintains the permissions of the user who created it -- if that user loses access to the blob before the link would otherwise expire, the link will no longer function.
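For illustration, a minimal sketch of how the two options could be returned from the upload handler in the question (the res object here is a hypothetical Express/Cloud Functions response):
// Option 1: signed URL -- no ACL change needed, but the link expires.
const [signedUrl] = await file.getSignedUrl({ action: 'read', expires: '03-09-2491' });

// Option 2: mediaLink -- the object itself must be publicly readable.
// The upload above already passed `public: true`; otherwise call:
// await file.makePublic();
const publicUrl = meta.mediaLink;

// Return whichever fits your access model.
res.json({ signedUrl, publicUrl });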
Example use case: upload a file to the server and save the resulting path to the database; only authenticated users should be able to upload files.
How do I implement this?
To summarize, we have 3 ways:
client uploads to S3 (or a similar service), gets the file URL, then makes an insert/update mutation to the right table
custom uploader - write an application/server that uploads files and mutates the DB, and use nginx routing to redirect some requests to it
custom resolver using schema stitching (example)
If you are uploading files to AWS S3, there is a simple way that doesn't require launching another server to process the file upload or creating a handler for a Hasura action.
Basically, when you upload files to S3, it's better to get a signed URL from the backend and upload to S3 directly. By the way, for hosting multiple image sizes, this approach is easy and painless.
The critical point is how to get the S3 signed URL for the upload.
In Node.js, you can do:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600,
});
console.log("signedUrl", signedUrl);
A signedUrl example is like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally, you would put the above code into a handler hosted on AWS Lambda or Glitch, and add some logic for authorization and perhaps even insert a row into a table, as sketched below.
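A minimal sketch of such a handler (a Node Lambda behind API Gateway is assumed; verifyToken and the table insert are placeholders for your own auth and database logic):
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

exports.handler = async (event) => {
  // Placeholder auth check -- e.g. verify a JWT sent by the client.
  const userId = verifyToken(event.headers.Authorization); // hypothetical helper
  if (!userId) return { statusCode: 401, body: "Unauthorized" };

  const { filename } = JSON.parse(event.body);
  const signedUrl = s3.getSignedUrl("putObject", {
    Bucket: "my-bucket",
    Key: `uploads/${userId}/${filename}`,
    Expires: 600,
  });

  // Optionally insert a row into your files table here (e.g. via a Hasura mutation).

  return { statusCode: 200, body: JSON.stringify({ signedUrl }) };
};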
You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to get the Signature?
After digging into the AWS JS SDK, we can find that the signature is computed here:
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'
It's just an HMAC-SHA1 over a string in a certain format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result.
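For reference, a sketch of the same computation in plain Node, using the string-to-sign and values shown above ('AWS_SECRET' stands in for the real secret key):
const crypto = require('crypto');

const stringToSign = [
  'PUT',                         // HTTP verb
  '',                            // Content-MD5
  '',                            // Content-Type
  '1621135558',                  // Expires (unix timestamp)
  '/my-bucket/path/to/file.jpg'  // canonicalized resource
].join('\n');

const signature = crypto
  .createHmac('sha1', 'AWS_SECRET')
  .update(stringToSign)
  .digest('base64');
console.log(signature);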
So if you have a table "files"
CREATE TABLE files (
  id SERIAL,
  created_at timestamptz,
  filename text,
  user_id integer
);
you can create a SQL function
-- Requires the pgcrypto extension for HMAC(): CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE OR REPLACE FUNCTION public.file_signed_url(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT ENCODE( HMAC(
    'PUT' ||E'\n'||E'\n'||E'\n'||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    ||E'\n'|| '/my-bucket/' || file_row.filename
  , 'AWS_SECRET', 'SHA1'), 'BASE64')
$function$;
Finally, follow this to expose this computed field to Hasura.
This way you don't need to add any backend code, and you can handle permissions entirely in Hasura.
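To actually upload from the client, the computed signature still has to be assembled into a full URL together with the access key ID and the Expires timestamp. A rough sketch (the bucket name and AWS_ACCESS_KEY_ID are placeholders, and the file object is assumed to come from a Hasura query selecting filename, created_at, and the computed file_signed_url field):
// Build the pre-signed PUT URL from the computed field.
function buildUploadUrl(file) {
  const expires = Math.floor(new Date(file.created_at).getTime() / 1000) + 600;
  const params = new URLSearchParams({
    AWSAccessKeyId: 'AWS_ACCESS_KEY_ID',   // placeholder access key ID
    Expires: String(expires),
    Signature: file.file_signed_url,       // URLSearchParams handles the URL encoding
  });
  return `https://my-bucket.s3.amazonaws.com/${file.filename}?${params}`;
}

// Usage: fetch(buildUploadUrl(file), { method: 'PUT', body: blob });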
I fetched an item from my Firebase storage bucket via this technique (generally):
const url = await firebase.storage().ref('my/ref').getDownloadURL();
const filename = 'filename.ext';
const a = document.getElementById('link');
a.href = url;
a.download = filename;
a.click();
I did it the above way prior to trying the example from the docs:
storageRef.child('images/stars.jpg').getDownloadURL().then(function(url) {
  // `url` is the download URL for 'images/stars.jpg'

  // This can be downloaded directly:
  var xhr = new XMLHttpRequest();
  xhr.responseType = 'blob';
  xhr.onload = function(event) {
    var blob = xhr.response;
  };
  xhr.open('GET', url);
  xhr.send();
});
When trying it this way, I hit the CORS error. After adding the CORS config to my bucket, it then worked as expected. However, I cannot determine why I was able to successfully fetch it via the first technique prior to configuring CORS.
I tested it again by removing the GET method from my CORS config and uploading the config file again via gsutil. I was still able to successfully obtain the file via the first technique described above.
If this is possible without configuring CORS, how can I prevent it so that I can restrict access? Odds are no one will be able to figure out the ref required to build the link anyway, because the actual ref contains multiple unique IDs that will be all but impossible to guess. This is mainly a question out of curiosity.
I cannot determine why I was able to successfully fetch it via the first technique prior to configuring CORS.
Because the same-origin policy doesn't apply when the JavaScript can't access the data. In your first example, the JS tweaks the document and the document accesses the data. In the second example, the JS accesses the cross-origin data, and the absence of CORS prevents such access.
If this is possible without configuring CORS, how can I prevent it so that I can restrict access?
CORS isn't designed to restrict access. (Wait, what?) CORS is designed to permit access that would otherwise be assumed to be something the user would not want -- for scripts on one page to have access to data from another origin, including, potentially, handing over use of the user's credentials to scripts on the current page when accessing the foreign site. CORS allows site B to tell the browser that it expects to be contacted by scripts from site A, and therefore such access should not be unexpected or assumed unauthorized. It has no impact on requests that don't fall under the same origin policy.
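For reference, the bucket-level CORS configuration the question refers to is just a JSON document applied with gsutil, roughly like this (the origin, method, and header values are placeholders):
[
  {
    "origin": ["https://www.example.com"],
    "method": ["GET"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
applied with gsutil cors set cors.json gs://your-bucket.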
The solution -- and I apologize if I am stating the patently obvious -- is that getDownloadUrl() should not be able to fetch a usable URL for the object, if the object should not in fact be accessible. You can't trust code running on the browser, so whatever credentials are in play here should not be able to be used in this way, if the object is not intended to be accessible... otherwise you have a misconfiguration that is allowing access that should not be allowed.
I have an ASP.NET Core web app and I'm using Angular 4. There are a lot of resources showing how to upload a file to S3, which I've done. But there doesn't seem to be anything about reading the file.
I want to give users the ability to upload a JSON file, save it to S3, then on a different view show the user all of the files they've uploaded as well as display the content of the file.
Are there any resources for showing how to do this?
If the items are publicly available, you can use the S3 JavaScript SDK's getObject method to download a file.
/* The following example retrieves an object from an S3 bucket. */
var params = {
  Bucket: "examplebucket",
  Key: "HappyFace.jpg"
};
s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
  /*
  data = {
    AcceptRanges: "bytes",
    ContentLength: 3191,
    ContentType: "image/jpeg",
    ETag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
    LastModified: <Date Representation>,
    Metadata: {
    },
    TagCount: 2,
    VersionId: "null"
  }
  */
});
If the files are private, use S3 signed URLs or CloudFront signed URLs (or cookies) to generate a download URL from your backend after authorizing the user. Using this download URL, download the file from S3 directly in the Angular app.
Examples:
Using CloudFront signed URLs.
Using S3 signed URLs via the S3 SDK's getSignedUrl method (see the sketch below).
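For the getSignedUrl route, a minimal backend sketch (aws-sdk v2; the bucket, key, and expiry are placeholders):
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

// Generate a time-limited download URL after you have authorized the user.
const downloadUrl = s3.getSignedUrl("getObject", {
  Bucket: "examplebucket",
  Key: "HappyFace.jpg",
  Expires: 300, // seconds
});
// Return downloadUrl to the Angular app, which can then GET it directly.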
Another option is to generate temporary S3 access credentials from AWS STS directly on your backend and send them back to the Angular app, or use an authentication service such as AWS Cognito, so that the Angular app can use those credentials to invoke the S3 SDK. A rough sketch of the STS approach follows.
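This sketch assumes aws-sdk v2; the session name, policy, and bucket ARN are placeholders:
const AWS = require("aws-sdk");
const sts = new AWS.STS();

// Runs on your backend after the user has been authenticated.
async function getScopedS3Credentials() {
  const { Credentials } = await sts.getFederationToken({
    Name: "angular-user",        // placeholder federated user name
    DurationSeconds: 900,        // 15 minutes
    Policy: JSON.stringify({     // scope the credentials down to one prefix
      Version: "2012-10-17",
      Statement: [{
        Effect: "Allow",
        Action: ["s3:GetObject"],
        Resource: ["arn:aws:s3:::examplebucket/user-uploads/*"],
      }],
    }),
  }).promise();

  // Send AccessKeyId, SecretAccessKey and SessionToken to the Angular app,
  // which configures the S3 SDK with them.
  return Credentials;
}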