After adding keys, I can write to the S3 bucket, but does reading require the bucket to be publicly available to everyone?
I followed the instructions from:
https://github.com/ant-media/Ant-Media-Server/wiki/Amazon-(AWS)-S3-Integration
I get an error even when the bucket has the same permissions as provided in the wiki.
I have installed the Firebase extension Delete User Data, which triggers on user deletion. I have specified the storage paths as per the provided instructions:
Where in Google Cloud Storage do you store user data? Leave empty if
you don’t use Cloud Storage. Enter the full paths to files or
directories in your Storage buckets, separated by commas. Use {UID} to
represent the User ID of the deleted user, and use {DEFAULT} to
represent your default Storage bucket. Here’s a series of examples. To
delete all the files in your default bucket with the file naming
scheme {UID}-pic.png, enter {DEFAULT}/{UID}-pic.png. To also delete
all the files in another bucket called my-app-logs with the file
naming scheme {UID}-logs.txt, enter
{DEFAULT}/{UID}-pic.png,my-app-logs/{UID}-logs.txt. To also delete a
User ID-labeled directory and all its files (like media/{UID}), enter
{DEFAULT}/{UID}-pic.png,my-app-logs/{UID}-logs.txt,{DEFAULT}/media/{UID}
Cloud Storage Paths: {DEFAULT}/profilepic/{UID}.jpeg
I always get the following log:
File: 'profilepic/uid_1234.jpeg' does not exist in Cloud Storage, skipping
It has been reported in the GitHub issues for the Firebase extension. For extension version 0.1.7, check the temporary fix of using your storage bucket name instead of {DEFAULT}.
To find your bucket name:
Go to your Storage dashboard in the Firebase console.
Click the Files tab, then look in the header of the file viewer.
Copy the URL to your clipboard. It's usually in the form project-id.appspot.com.
Instead of
{DEFAULT}/profilePhotos/{UID}.jpg
use:
your-project.appspot.com/profilePhotos/{UID}.jpg
where your-project.appspot.com is your bucket name.
I am not sure whether the latest extension version, 0.1.8, fixes the issue.
I'm trying to implement a system that allows react-native clients to upload files to a specific folder in Cloud Storage and allows clients to download files from it. I can't do this directly from the client because I first need to query Firestore to validate that the user is 'friends' with the owner of the folder, which grants read/write permissions.
I decided to use Cloud Functions as a middle layer to encapsulate this business logic and I expected to also be able to use it as a middle layer to upload the files. However, I feel like I may be misunderstanding how to best use these services together to solve this problem.
Current Plan:
Client uploads file to Cloud Function (assuming they are permitted after Cloud Function queries Firestore and validates)
Cloud Function uploads file to Cloud Storage
Client can then request the file from a Cloud Function, which validates permissions using Firestore and downloads the file from Cloud Storage
Cloud Function sends file to client
Questions:
Can/Should I use Cloud Functions in this way as a middle layer to upload files after validating permissions stored in Firestore?
Is there an alternative solution using firebase that would mitigate the 10MB download limit with Cloud Functions but still allow me to authenticate uploads/downloads to and from Cloud Storage using my custom business logic on relationships in Firestore?
Any help or guidance here is appreciated! I'm very new to firebase, cloud architecture, and architecture in general.
This is definitely a valid and technically feasible approach. Whether you should do it, only you can determine of course.
An alternative is to use Firebase Storage's security rules to enforce the access control on the files. But you won't be able to read anything from Firestore in those rules, so you'll have to ensure everything needed to grant/deny access is in the path of the requested file, in the file's metadata, and/or in the user's token.
Even if you implement the download of files in Cloud Functions, you can still perform uploads through Firebase. You could in that case for example have the user write to a temporary location, which then triggers a Cloud Function that can do whatever additional checks you want on the file.
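As an illustration of that last option, here is a minimal sketch, assuming a hypothetical layout where clients upload to uploads/{ownerUid}/{fileName}, friendships live in Firestore documents at users/{ownerUid}/friends/{uploaderUid}, and the client attaches its UID as custom metadata on the upload; all of those names are my assumptions, not something prescribed by Firebase:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Fires after a client finishes uploading a file to the temporary "uploads/" area.
export const validateUpload = functions.storage.object().onFinalize(async (object) => {
  const filePath = object.name; // e.g. "uploads/ownerUid/photo.png"
  if (!filePath || !filePath.startsWith("uploads/")) return;

  const [, ownerUid, ...rest] = filePath.split("/");
  const uploaderUid = object.metadata?.uploaderUid; // assumed custom metadata set by the client
  const file = admin.storage().bucket(object.bucket).file(filePath);

  // Hypothetical friendship check: users/{ownerUid}/friends/{uploaderUid} must exist.
  const friendDoc = uploaderUid
    ? await admin.firestore().doc(`users/${ownerUid}/friends/${uploaderUid}`).get()
    : null;

  if (friendDoc && friendDoc.exists) {
    // Accepted: move the file out of the temporary area into its final location.
    await file.move(`files/${ownerUid}/${rest.join("/")}`);
  } else {
    // Rejected: remove the temporary upload.
    await file.delete();
  }
});

For downloads you could similarly have a Cloud Function check the friendship document and then hand the client a short-lived signed URL instead of streaming the bytes through the function, which also avoids the response size limit you mention.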
I'm trying to initialize the Admin SDK following the instructions here:
https://firebase.google.com/docs/admin/setup#initialize-sdk
Basically, I created a service account and stored the accompanying JSON for that service account in Cloud Storage. Great. Now the example says I should reference that JSON by:
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the
file path of the JSON file that contains your service account key
But, we aren't storing the file local to these cloud functions, we are storing it in cloud storage. How do we specify a path to a non-local file?
The Admin SDK does not support remote configurations. It needs to be local, either on disk or in memory. You will have to write code to download the config from the storage bucket somehow, then feed that to the SDK.
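As a rough sketch of that (the bucket and file names here are placeholders, not anything defined by Firebase), you could download the key with the Cloud Storage client and pass the parsed JSON to the SDK:

import { Storage } from "@google-cloud/storage";
import * as admin from "firebase-admin";

async function initAdmin() {
  // Download the service account key from a bucket; names are placeholders.
  const [contents] = await new Storage()
    .bucket("my-config-bucket")
    .file("serviceaccount.json")
    .download();

  // Feed the in-memory JSON to the Admin SDK.
  admin.initializeApp({
    credential: admin.credential.cert(JSON.parse(contents.toString())),
  });
}

That said, inside Cloud Functions you can usually just call admin.initializeApp() with no arguments, because the runtime's default service account credentials are already available.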
In order to initialize the Admin SDK locally (not deployed), you need to download the service account JSON file and make it accessible by either:
Setting the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the file, e.g. "/home/user/serviceaccount.json".
Including the service account file alongside the functions code and accessing it programmatically via a relative path.
Once deployed, there is a runtime service account PROJECT_ID@appspot.gserviceaccount.com and the environment variables are already set.
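A short sketch of those two options (the file path is a placeholder):

import * as admin from "firebase-admin";

// Option 1 (local): relies on the environment variable, e.g.
//   export GOOGLE_APPLICATION_CREDENTIALS="/home/user/serviceaccount.json"
// The same bare call also works once deployed, because the runtime service
// account credentials are picked up automatically.
admin.initializeApp();

// Option 2 (local): load a key file shipped next to the functions code.
// admin.initializeApp({
//   credential: admin.credential.cert(require("./serviceaccount.json")),
// });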
It's really simple: I'm manually uploading files to Firebase Storage (some pictures that I want to use in my app). I need the public HTTP address, but all I can find there is this type of link: gs://myapp.appspot.com/logo3.png. How do I get from that to a URL that I can actually use in my browser?
Cloud Storage buckets do not have publicly accessible URLs by default. You have at least two options to get one:
Write some code in your app to get a download URL for the content (a sketch follows below). I've linked to the instructions for JavaScript, since you haven't indicated the client platform you're working with.
If you're just trying to get a static URL without calling an API, you will have to use the Google Cloud console to mark the entire storage bucket as "public", then build URLs to the content as described in the documentation.
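For the first option, a minimal sketch with the JavaScript/TypeScript web SDK (the bucket and file name are taken from your example; the config values are placeholders):

import { initializeApp } from "firebase/app";
import { getStorage, ref, getDownloadURL } from "firebase/storage";

// Your Firebase project configuration (placeholder values).
const firebaseConfig = {
  apiKey: "…",
  projectId: "myapp",
  storageBucket: "myapp.appspot.com",
};

const storage = getStorage(initializeApp(firebaseConfig));

// Resolve gs://myapp.appspot.com/logo3.png to an https:// URL the browser can use.
getDownloadURL(ref(storage, "logo3.png")).then((url) => {
  console.log(url);
});

Note that this call still goes through your Storage security rules, so the requesting user must be allowed to read the file.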
I have a shiny-server app deployed on an AWS EC2 instance.
This app uses the library aws.s3 to perform read/write operations on an S3 bucket.
The problem is that, due to company policy reasons, I should use MFA authentication on the AWS IAM users.
If I add MFA authentication to the IAM user used on the shiny-server instance, it fails to download/upload data to the S3 bucket (permission denied).
R code to read from the S3 bucket:
# Credentials for the IAM user are set as environment variables
Sys.setenv("AWS_ACCESS_KEY_ID" = "ACCESSKEY",
           "AWS_SECRET_ACCESS_KEY" = "SECRETACCESSKEY",
           "AWS_DEFAULT_REGION" = "REGION",
           "AWS_SESSION_TOKEN" = "")
# FUN is the reader function (e.g. readr::read_csv); "myobject" is the S3 object key
aws.s3::s3read_using(FUN, trim_ws = TRUE, object = "myobject")
Are there other ways to download/upload S3 files through R? I can use methods other than this one, but I can't change the IAM policy.
You can improve on your approach here. You should not be using IAM Users to access S3 from EC2 instances, so there should not be a need for 2-factor authentication in the first place.
When accessing AWS Services, you should try to look for IAM Roles rather than IAM users, wherever possible. You can read more about different identities in the official docs here.
Among other things, the credentials for AWS IAM Roles are automatically rotated behind the scenes, and you are not required to maintain or pass AWS user credentials anywhere. This means the credentials are short-lived, which reduces the impact in case they are compromised.
You can refer to this guide from AWS Knowledge Center for steps to get this up and running.