Custom domain name with SSL on Firebase Storage - firebase

I was able to get a custom domain name mapped to my Firebase Storage bucket by simply naming the bucket the same name as my domain name and then pointing the CNAME record to c.storage.googleapis.com. However, https doesn't work because the common name on the certificate is different. Is it possible for me to upload a certificate or, even better, have GCP or Firebase manage a certificate?

I'm coming a bit late to the party and this question might have been answered elsewhere. However, since this was the first result I found when googling for this feature, here goes nothing:
For starters, let's say you have a CNAME like assets.somedomain.com pointing to c.storage.googleapis.com, and you create a bucket called assets.somedomain.com.
Then you upload a file, whose public url will look like:
https://firebasestorage.googleapis.com/v0/b/assets.somedomain.com/o/arduino.png?alt=media&token=asdf
Which can be seen as:
firebasestorage.googleapis.com/v0/b/
+
assets.somedomain.com
+
/o/
+
arduino.png?alt=media&token=asdf
You should be able to view said file using:
https://assets.somedomain.com/arduino.png?alt=media&token=asdf
Which is
assets.somedomain.com/
+
arduino.png?alt=media&token=asdf
(basically, you strip the original base URL and the /o/ prefix)
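The stripping can be sketched as a small helper; the function name and structure here are illustrative, not part of any SDK (note that object names containing URL-encoded slashes would need extra handling):

```javascript
// Convert a firebasestorage.googleapis.com download URL into the equivalent
// custom-domain URL, assuming the bucket is named after the domain.
function toCustomDomainUrl(firebaseUrl) {
  const url = new URL(firebaseUrl);
  // The path looks like /v0/b/<bucket>/o/<object>
  const [, , , bucket, , ...object] = url.pathname.split('/');
  return `https://${bucket}/${object.join('/')}${url.search}`;
}

console.log(toCustomDomainUrl(
  'https://firebasestorage.googleapis.com/v0/b/assets.somedomain.com/o/arduino.png?alt=media&token=asdf'
));
// https://assets.somedomain.com/arduino.png?alt=media&token=asdf
```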
But of course you get a big fat warning telling you the certificate is invalid, because it's meant for *.storage.googleapis.com.
In my case, I was able to circumvent this using Cloudflare's Universal SSL, which acts like a proxy that asks no questions whatsoever.
You try again, but somewhere in the middle the request becomes anonymous and you get an XML stating that you lack the storage.objects.get permission.
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
Anonymous users does not have storage.objects.get access to object.
</Details>
</Error>
This means that even with the token included in the query string, the proxied request has no permission. The next step, then, is to make the bucket publicly readable in Google Cloud Console -> Storage.
(This can be done using gcloud cli, but I found this method easier to explain)
Be sure to use the legacy object reader permission, which stops visitors from actually listing the bucket contents.
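For reference, this grant can also be made from the command line; a possible invocation (using the example bucket name from above) might be:

```shell
# Grant anonymous read access via the legacy object reader role,
# which does not allow listing the bucket's contents
gsutil iam ch allUsers:roles/storage.legacyObjectReader gs://assets.somedomain.com
```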
After that, you should be able to access the image using:
https://assets.somedomain.com/arduino.png
Note that you don't even need to include "alt=media", because Cloudflare will serve the file itself instead of its metadata.

Currently we don't support custom domains in Cloud Storage for Firebase.
You have two options:
Use Firebase Hosting (developer generated content)
Set this up via GCS static hosting (docs)
In either case though, you'll lose the ability to use the Firebase SDKs for Cloud Storage, as well as its authentication and authorization functionality.
Happy to learn more about the use case to see if it's something we should support in the future.

Update April 2021
Firebase 8.4.0 introduces storage().useEmulator(host, port).
You'll still need a reverse proxy, which you can do with Google Cloud Load Balancer or others.

It's actually quite simple to achieve what you need - i.e. to serve your storage content under your custom domain with SSL support. But you'd take a slightly different approach.
The key here is that, as I was once told by Firebase support, the Storage API is meant for the internal usage of a developer, including the URLs that point to files; they are not meant to be exposed to end users. That sounded kind of strange to me at first, but after I gave it a bit of thought it started to make sense.
So here is how I solved it using the updated perspective.
You can create a dedicated endpoint which redirects to a cloud function.
That endpoint would accept a storage url as a parameter.
Then the cloud function would read the url and just stream its content back.
That's it.
No need for complex proxies setup etc. All your content will now be served under your custom domain.
Here is a brief example of how the core logic of such a function may look like:
const https = require('https');

(req, res) => {
  // the storage url is passed as a query parameter
  const link = req.query.url;
  // request the content of the link
  https.get(link, response => {
    if (response.statusCode < 200 || response.statusCode > 299) {
      // handle error: report the upstream failure and discard the body
      res.status(502).send('Could not fetch the requested content');
      response.resume();
    } else {
      // stream the content back to the client
      response.pipe(res);
      res.on('close', () => response.destroy());
    }
  });
}
Now you can do something like this (assuming your function is hosted under 'storage/content'):
let contentUrl = 'https://my-custom-domain.com/storage/content?url={{put your storage url that refers to a file}}'
and then, for example, assign that url to an iframe's src:
<iframe :src="contentUrl"/> //this is how one would do it in Vue.js
Opening such a link in a browser will display your file content (or download it, depending on the browser's settings).
I'll post a more detailed explanation with examples if this answer gets more attention.

I fully agree with the previous answer, and thanks a lot for it. But let me write the instructions in a clearer fashion:
1. Create a bucket with your custom domain name in Google Cloud Platform -> Storage.
2. Grant the legacy object viewer permission to allUsers. Note: you have to search for "legacy object viewer" in the filter text.
3. Add a DNS record in your domain service provider's account with the CNAME assets, pointing to c.storage.googleapis.com.
4. Create a Cloudflare account if you do not have one.
5. Add your website in Cloudflare, where you need to enter your domain name, not the subdomain.
6. Copy the nameserver details from Cloudflare into your DNS service provider's nameserver settings.
7. It will take some time for all the DNS records to move over to Cloudflare.
8. Go to Page Rules in Cloudflare, add assets.yourdomain.com, and turn on "Always Use HTTPS".
You are done.

For GCloud users:
Go to the console and open Load Balancing.
Provide an alias and set up your mapping so that assets.yourdomain.com points to */images/.
It will create a new balancer with an IP address; it is multi-regional, don't worry.
Open Cloud CDN, give it an alias, and select the balancer you created.
Select your bucket name, which is your Firebase Storage bucket's name.
Go to your domain provider, like GoDaddy, and add an A record that points assets.yourdomain.com to the balancer's IP address.
To sum up:
Google handles the certificate provisioning and gives you an IP; you add an A record which points to the given IP.
When you visit assets.yourdomain.com, the request goes to Google, and Google routes it to your bucket.
It takes 5 minutes to complete, but I spent a week understanding how it works :)
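For reference, the console steps above roughly correspond to the following gcloud commands; all resource names and the domain are placeholders, and flags may vary by gcloud version:

```shell
# Backend bucket wrapping the storage bucket, with Cloud CDN enabled
gcloud compute backend-buckets create assets-backend \
    --gcs-bucket-name=myproject.appspot.com --enable-cdn

# URL map, Google-managed certificate, HTTPS proxy, and forwarding rule
gcloud compute url-maps create assets-lb --default-backend-bucket=assets-backend
gcloud compute ssl-certificates create assets-cert --domains=assets.yourdomain.com
gcloud compute target-https-proxies create assets-proxy \
    --url-map=assets-lb --ssl-certificate=assets-cert
gcloud compute forwarding-rules create assets-rule \
    --global --target-https-proxy=assets-proxy --ports=443
```

The forwarding rule's IP address is what goes into the A record at your domain provider.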

Using Firebase Storage means you are using a GCP Cloud Storage Bucket.
Using the GCP Load Balancing feature, you can expose your GCP Storage bucket on a public IPv4 address and have Google manage the SSL certificates.
Then, you go to your domain provider console and add an "A record" to your Bucket IP.
Here is a great post : https://deliciousbrains.com/wp-offload-media/doc/how-to-set-up-a-custom-domain-cdn-for-google-cloud-storage/
GCP = Google Cloud Platform
Note that GCP Load Balancing is not free.

Related

how to secure AWS S3 bucket to access from a custom domain (website)?

I am trying to secure our AWS S3 bucket so it can be accessed only from our WordPress website. I have tried to implement this using the blog post How to restrict s3 bucket for specific domain name? - Eternal Blog, but the problem is not solved and the policies are not working as intended.
Using referer to limit traffic is not a reliable security mechanism because it can easily be faked when sending an HTTP request.
There is no way to guarantee that content is only "accessed from a custom domain".
The 'correct' way to secure content in Amazon S3 is to have users authenticate to a back-end app, and then serve content via Amazon S3 pre-signed URLs, which provide time-limited access to private objects in Amazon S3.
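As a sketch of such a pre-signed URL, the AWS CLI can mint one directly; the bucket, key, and expiry below are placeholders:

```shell
# Generate a time-limited pre-signed GET URL (valid for 5 minutes here)
aws s3 presign s3://my-bucket/private/report.pdf --expires-in 300
```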
Answer:
The way described in that blog works fine; make sure you create a new bucket and go that way. You can skip the CORS section inside that tutorial.
The issue in my case was that the previous bucket, which was created by my client, had some rules added to it that did not let me do the intended tasks, so I just created a new bucket and then followed that blog.
Thanks

AWS Web ACL rule: alternatives to Referer

I am looking for a way to limit access to AWS S3 hosted data in a controlled and at least semi-secure way. I have various resources in a number of S3 buckets, with CloudFront as CDN. I then have a WordPress based website using a theme that allows me to sell "courses". Finally I manage my domains so I can create a sub domain for the content download link, i.e. content.domainname.com.
Ideally I want to limit access to content to a specific set of courses, so only people who have bought the course, and are linking to the content from a web page in that course, can (easily) get at the data.
I know I can use an AWS Web ACL rule to check the Referer header, to limit downloads to links on my domain. And I think I can expand on that to test more of the URL, so for www.domainname.com/paid/coursename/page.html I could have a rule that tests for the /paid/coursename/ portion of the path and refuses otherwise.
However, I also know that the referer can be easily spoofed, and more importantly some browsers and internet security software will replace the referer, and I don't want my site security to force customers to change their security settings. So, is there another option, to include some sort of data in the HTTP request, that limits access in a way that is both somewhat secure but not dependent on client-side settings? Perhaps something like a hash that I could include in the link itself? Or, maybe the WordPress API and AWS Web ACL rules can communicate in some way so as to validate that the logged-on user has membership in the course? Grasping at straws here, I suspect.
Additionally, there will be a PowerShell script that can be downloaded and run, which will access downloadable content as well. Again, I want to limit access, but in this case I need to be able to maintain the criteria on AWS as I have subscription and non subscription versions of the courses, and the PS script should only download for customers on subscription. So, I could provide the PS script with something like a customer ID, then maintain a list of customer IDs that are currently on subscription so the Web ACL rule could filter. But again, I suspect that HTTP header won't get the job done, because it could be changed by internet security at the customer location. But now I am limited by what PowerShell can do with regards to HTTP requests.
I know, rather an open ended question, but hopefully someone can at least point me in the right direction. It sure seems like both needs are something that AWS should be able to do, I am just so out of my depth here I don't know where to start, and AWS documentation requires that you have some clue to get you going.

How to hide Google Storage bucket path and image name from URL

I'm using Google Storage to store the profile pictures of my users. I have a couple of thousand pictures.
Now the pictures are being saved in a bucket like so:
data/images/profiles/USER_ID.jpg
So the URL to an image:
https://storage.cloud.google.com/data/images/profiles/USER_ID.jpg
I don't want users to be able to see someone else's picture just by knowing their USER_ID, and still, it has to be the USER_ID for easier searching on the developer's side.
I can't use Signed URL as my users do not have a google account, and the pictures from the storage are fetched from a Mobile Application.
Is there a way to keep the file names as they are in storage, but simply hide the path+filename from the URL?
https://storage.cloud.google.com/fc720d5c05411b03e5e2a6692f8d7d61.jpg -> points to https://storage.cloud.google.com/data/images/profiles/USER_ID.jpg
Thank You
You have several options. Here are a few:
Have users request the URL for another user from the server, then have the server decide whether or not the user is allowed to see the image. If so, have the server (which does have a service account) generate a signed URL and pass it back to the user (or redirect to it). This way, although the user may know the user ID of another user and the URL of their image, they still can't see the image unless the server agrees that this is okay.
Use Firebase Storage to manage the images, which will still store them in GCS but will give you Firebase's auth support.
Proxy the images through your app, either an app engine app or something running in GCE or GKE. This lets you hide everything about the source of the image, including the user ID, but has the downside of requiring all of the data to pass through your service.
Reexamine your requirements. "Easier search on the developer's side" may not be as important as you think, and you need to weigh the benefit of that against the cost of working around it.
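As a sketch of the first option, the server can mint a signed URL with gsutil and a service-account key; the key file path, bucket, and object below are placeholders:

```shell
# Create a signed URL valid for 10 minutes, using a service-account key file
gsutil signurl -d 10m service-account-key.json \
    gs://my-bucket/data/images/profiles/USER_ID.jpg
```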
Another option is the Google Images API, available on App Engine. You can link your Cloud Storage objects with the Images API and use its benefits - secure URLs, and transforming and resizing images via URL parameters.
You only need to prepare a serving URL for every image stored in GCS and persist that serving URL (for example in Google Datastore):
ImagesService imagesService = ImagesServiceFactory.getImagesService();
ServingUrlOptions suo = ServingUrlOptions.Builder
.withGoogleStorageFileName(gcsImageObjectPath)
.secureUrl(true);
String servingUrl = imagesService.getServingUrl(suo);

How to hide the Firebase Storage download URL from the network tab of browsers?

I'm leveraging Firebase Authentication for downloading images from Firebase Storage. I'm also leveraging the Google API HTTP referrer restrictions to block access by domain, so that my image from Firebase Storage is only accessible from my website. But when I go to the network tab of my browser I can see the download URL of the image. With it, anyone can download my image and use it. What should I do so that my images are secured?
P.S.: I'm using the Firebase Storage SDK, following the documentation. When I execute the code below,
storageRef.child('images/stars.jpg').getDownloadURL().then(function(url) {
// `url` is the download URL for 'images/stars.jpg'
var img = document.getElementById('myimg');
img.src = url;
}).catch(function(error) {
// Handle any errors
});
I can see the download URL in the network tab of my browser.
You can't. When you give a Cloud Storage download URL to anyone, in any way, you are implicitly trusting them with its access. They are free to share it with anyone they want. If you don't trust a user, then don't give them the URL.
If you don't like the way this works, then don't use download URLs, and allow only secure downloads via the Firebase SDK. At that point, you are trusting that the user will not take the content, upload it elsewhere, and generate a URL to it.
You seem to have two options as far as I can tell. Unfortunately, they are basically one and the same, as you will probably have to implement both.
The first option is to revoke the access token on individual files you don't want to be allowed to download. Unfortunately, this also means that you can't display them anywhere you currently do via the URL as it breaks that link. See this answer for why that is a pain to do.
The second option is to use storage references to download them client side, but this only works if you are using Firebase SDK's in a web app and not a simple static website. I think this shouldn't expose the URL on the network tab of the browser if the app is set up correctly.
You can implement the second option without the first, and the URL shouldn't be exposed; but once you implement the first, you can't use the URL anymore and have to use both options... :/ meh... Firebase is great, but nothing is perfect.
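A sketch of the second option with the modular web SDK (v9+); this assumes an initialized Firebase app in a browser context, and getBlob() is gated by your security rules rather than by a tokenized download URL:

```javascript
// Sketch only: requires the firebase npm package and an initialized app.
import { getStorage, ref, getBlob } from 'firebase/storage';

const storage = getStorage();
const imageRef = ref(storage, 'images/stars.jpg');

// Download the bytes through the SDK (authenticated via Firebase Auth),
// so no long-lived tokenized download URL is handed to the page.
const blob = await getBlob(imageRef);
document.getElementById('myimg').src = URL.createObjectURL(blob);
```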
This seems to work; I'll update if it doesn't.
Edit: "However, the CORS configuration applies only to XML API requests," so one can still go to the file directly: https://cloud.google.com/storage/docs/cross-origin
In the GCP console, open Cloud Shell (>_), click the pencil icon, and create cors.json with the following contents: [{"origin":["https://yourorigin1.com"],"method":["GET"],"maxAgeSeconds":3600}]
Go back to the shell and enter: gsutil cors set cors.json gs://yourproject1.appspot.com
https://stackoverflow.com/a/58613527/11711280
Workaround:
I will make all rules that reference resource.data.authorId, resource.data.members, etc. match against request.auth.uid (or restrict calls in client code to non-anonymous uids), and sign in every user anonymously at first. Then uid will not be null when using a Firebase app initialized from our domain.
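The rules side of this workaround might look like the following Firestore rules fragment; the collection name and fields are illustrative. The point is that request.auth is non-null once the anonymous sign-in completes:

```text
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /posts/{postId} {
      // Anonymous sign-in still yields a uid, so request.auth is not null
      allow read: if request.auth != null
                  && (request.auth.uid == resource.data.authorId
                      || request.auth.uid in resource.data.members);
    }
  }
}
```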

What is the origin of an external API call made by a firebase function?

I'm trying to make a call to an external api using a cloud function.
The external API requires me to register the origin of the call I am making with them.
For example https://mywebsite.com
What would the url to register with them be?
mywebsite.firebaseapp.com?
The domain name registered in firebase console, mywebsite.com?
Or something else like https://us-central1-mywebsite.cloudfunctions.net/functionName ?
When working with massively scalable cloud products like Cloud Functions, you don't have guarantees about where your network traffic appears to come from. Your source IP can (and will) change over time, and the addresses behind your project's DNS entries (for your cloudfunctions.net hostname) can be expected to change similarly.
