How to upload files or images on Hasura GraphQL Engine

Example:
Upload a file to a server and save the resulting path to the database; only authenticated users should be able to upload files.
How can this be implemented?

To summarize, there are three approaches:
1. The client uploads to S3 (or a similar service), gets the file URL, then makes an insert/update mutation to the right table (see the sketch below).
2. Custom uploader: write an application/server that uploads files and mutates the DB, then use nginx routing to redirect upload requests to it.
3. Custom resolver using schema stitching (example).
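For the first approach, the final step is just a GraphQL mutation once the upload succeeds. A minimal sketch, assuming a files table with a url column (the endpoint, table, column names, and the jwt / uploadedFileUrl variables are hypothetical):

// After the S3 upload succeeds, record the file URL via a Hasura mutation.
// The JWT identifies the authenticated user; Hasura permissions enforce access.
await fetch("https://my-hasura.example.com/v1/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${jwt}`, // hypothetical auth token
  },
  body: JSON.stringify({
    query: `mutation ($url: String!) {
      insert_files_one(object: { url: $url }) { id }
    }`,
    variables: { url: uploadedFileUrl },
  }),
});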

If you are uploading files to AWS S3, there is a simple way that doesn't require launching another server to process the upload or creating a handler for a Hasura action.
Basically, when you upload files to S3, it's better to get a signed URL from the backend and upload to S3 directly. Incidentally, for hosting multiple image sizes, this approach is easy and painless.
The critical point is how to get the S3 signed URL for the upload.
In Node.js, you can do:
const AWS = require("aws-sdk");

// Create an S3 client; credentials are picked up from the environment.
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

// Generate a presigned PUT URL valid for 10 minutes.
const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600, // seconds
});
console.log("signedUrl", signedUrl);
A signedUrl example is like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally, you would put the code above in a handler hosted on AWS Lambda or Glitch, add some authorization logic, and perhaps also insert a row into a table.
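A minimal sketch of such a handler, assuming a Lambda proxy integration (verifyToken and insertFileRow are hypothetical helpers standing in for your auth and DB logic):

// Reuses the s3 client from the snippet above.
exports.handler = async (event) => {
  // Hypothetical: validate the caller's token before signing anything.
  const user = await verifyToken(event.headers.authorization);
  if (!user) return { statusCode: 401, body: "unauthorized" };

  const key = `uploads/${user.id}/${Date.now()}.jpg`;
  const signedUrl = s3.getSignedUrl("putObject", {
    Bucket: "my-bucket",
    Key: key,
    Expires: 600,
  });

  // Hypothetical: record the pending file in the database.
  await insertFileRow(user.id, key);
  return { statusCode: 200, body: JSON.stringify({ signedUrl }) };
};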
You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to get that Signature?
After digging into the AWS JS SDK, we find that the signature is computed here:
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
where, for the example above:
fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'
It's just an HMAC-SHA1 over a string in a fixed format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result.
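To check this, the signature can be reproduced with Node's built-in crypto module (AWS_SECRET stands in for your secret access key):

const crypto = require("crypto");

// Same string-to-sign as in the SDK example above.
const stringToSign = "PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg";
const signature = crypto
  .createHmac("sha1", process.env.AWS_SECRET)
  .update(stringToSign)
  .digest("base64");
console.log(signature);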
So if you have a table "files":
CREATE TABLE files (
  id SERIAL,
  created_at timestamptz,
  filename text,
  user_id integer
);
you can create a SQL function (HMAC comes from the pgcrypto extension):
CREATE OR REPLACE FUNCTION public.file_signed_url(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  -- Rebuild the AWS v2 string-to-sign and HMAC-SHA1 it with the secret key.
  SELECT ENCODE(HMAC(
    'PUT' || E'\n' || E'\n' || E'\n' ||
    (CAST(EXTRACT(epoch FROM file_row.created_at) AS integer) + 600)
    || E'\n' || '/my-bucket/' || file_row.filename,
    'AWS_SECRET', 'SHA1'), 'BASE64')
$function$;
Finally, follow this guide to expose the computed field through Hasura.
This approach lets you avoid adding any backend code and handle permissions entirely in Hasura.
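On the client, the computed field then only needs to be combined with the public pieces of the URL. A minimal sketch, assuming the computed field above is exposed as file_signed_url and fileRow comes from a GraphQL query (ACCESS_KEY_ID is the public AWS access key id, not the secret):

// Expires must match what the SQL function signed: created_at epoch + 600s.
const expires =
  Math.floor(new Date(fileRow.created_at).getTime() / 1000) + 600;
const uploadUrl =
  `https://my-bucket.s3.amazonaws.com/${fileRow.filename}` +
  `?AWSAccessKeyId=${ACCESS_KEY_ID}` +
  `&Expires=${expires}` +
  `&Signature=${encodeURIComponent(fileRow.file_signed_url)}`;

// PUT the file directly to S3 with the assembled presigned URL.
await fetch(uploadUrl, { method: "PUT", body: file });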

Related

Download images to users' local machines via Firebase Functions

Is it possible to download images to users' local machines directly via Firebase Functions? How would it work in these cases:
Images are stored in Firebase Storage.
Images are stored on other cloud storage providers (I can access them with a URL).
I don't want to download those images via URL links, so that I don't reveal the URL where the image is located.
Is it possible to download images to users' local machines directly via Firebase Functions?
No, it's not possible. The client must reach out to the server in order to download content. The content can't be "pushed" to the client without its authorization. That would be a huge security hole for the client.
This is why download URLs exist - to give the client something to download via a normal HTTP request.
You can create a presigned URL using the Google APIs library. The Firebase bucket is just a regular GCS bucket. Something like this:
const admin = getFirebaseAdmin();
let bucket = admin.storage().bucket(firebaseConfig.storageBucket);
const f = bucket.file(location);
if (!(await f.exists())) {
  throw logError(`No file found at specified location: ${location}`, functionName);
}
const url1 = await f.getSignedUrl({
  action: 'read',
  expires: new Date((new Date).getTime() + (24 * 60) * 60000) // expires in 24 hours
});
const url = url1[0];
return url;

Send firebase storage authorization as url parameter from a flutter web app

I would like to know how to make an authorized request to Firebase Storage using the user ID token as a parameter in the URL. Right now, with a Firebase rule of 'request.auth != null', I receive a 403 network error (Failed to load video: You do not have permission to access the requested resource). Here is my GET request URL:
https://firebasestorage.googleapis.com/v0/b/<bucket>/o/<folder_name>%2F<video_name>.mp4?alt=media&auth=eyJh...<ID TOKEN>...Ll2un8ng
- Without the Firebase rule in place, I'm able to successfully get the asset with this request URL: https://firebasestorage.googleapis.com/v0/b/<bucket>/o/<folder_name>%2F<video_name>.mp4?alt=media
- I also tried token=, token_id=, and tokenId=.
- The reason for not using the Firebase SDK to fetch the file is so that I can use the Flutter video_player package (https://pub.dev/packages/video_player#-example-tab-) with files in Firebase. I mention this in case there's a better way to use the video_player library in Flutter web right now:
_controller = VideoPlayerController.network(
  'https://flutter.github.io/assets-for-api-docs/assets/videos/bee.mp4',
  closedCaptionFile: _loadCaptions(),
);
[EDIT] It appears that it's not possible to pass the auth in as a query parameter. After some exploring, I found an acceptable way to still use video_player with your protected Firebase assets (if you're not using rules to protect them, you can use the Firebase URL directly). Here are the general steps with some sample code:
Use the Firebase Storage SDK package to get the file's bytes as a Uint8List; the URL given by getDownloadURL carries the correct auth. For example:
import 'package:firebase/firebase.dart';
import 'package:http/http.dart' as http;

final url = await storagePath.getDownloadURL();
final response = await http.get(url);
if (response.statusCode == 200) {
  return response.bodyBytes;
}
Use the Uint8List buffer to initialize a Blob object, which you then use to create an object URL; this gives you the same kind of interface as a file URL to use as the network URL for your video player:
final blob = html.Blob([data.buffer], 'video/mp4');
final videoUrl = html.Url.createObjectUrl(blob);
videoPlayerController = VideoPlayerController.network(videoUrl)
  ..initialize().then((_) {...
That's it.
Firebase Storage's REST API does not (rightly) support authorization via a GET query string as you are trying to do. Instead, it uses the standard Authorization header (see here).
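For illustration, a hedged sketch of that header-based request (the "Firebase <ID token>" scheme is what the official SDK appears to send; verify before relying on it):

// <bucket> and <path> are placeholders from the question's URL;
// idToken is the user's Firebase Auth ID token.
const res = await fetch(
  "https://firebasestorage.googleapis.com/v0/b/<bucket>/o/<path>?alt=media",
  { headers: { Authorization: `Firebase ${idToken}` } }
);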
Firebase Cloud Storage internally uses Google Cloud Storage, as mentioned here.
If the library you use doesn't support HTTP headers yet, you must consider an alternative. The issue you mentioned in the comment shows that the feature is still under development, so you could also wait for the library to ship support for headers.
Internally, all this package does for Flutter web is create an HtmlElementView widget (here), to which it passes a VideoElement (ref here) from the dart:html package with the provided URL; this translates to a <video> tag inside a shadow DOM element in your web page. The 403 error could also mean you are trying to access it from a different origin.
I would suggest the following approach:
Check your console for any CORS-related errors. If there are any, you will have to whitelist your IP/domain in Firebase Storage. Check this post for a possible approach, and more details here.
Check if you are able to access the URL directly with the authorization token as a query parameter, as you suggested. If not, then it is not the correct way to access the object and should be corrected. You could update the question with the exact error details.

Get URL from firebase storage file

I want to get a download URL for a file in my storage from within the onFinalize trigger. Ideally, I want a URL as short as possible (so preferably not a signed one, but the public kind shown in the Firebase Storage UI). Keep in mind that I am moving the file first, so I cannot access it directly from the onFinalize parameter.
I currently have the following solution:
await imageRef.move(newPath);
const newFile = defaultBucket.file(newPath);
const url = (await newFile.getSignedUrl({
  action: 'read',
  expires: '03-09-2491'
}))[0];
This approach has two flaws:
Apparently the signed URL is only valid for 3 days. This may be a known issue.
The URL is very long and takes up a lot of space in my Firestore.
I also saw an approach where the URL is reproduced from the bucket name and the token, but I did not manage to find the token in the metadata of the file.
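For reference, a hedged sketch of that reproduction approach: the download token, when one has been set, appears to live in the object's custom metadata under firebaseStorageDownloadTokens (an assumption worth verifying), and the short URL follows the pattern the Firebase console uses:

// Rebuild the short download URL from the bucket name + download token.
// newFile, defaultBucket, and newPath are from the snippet above.
const [metadata] = await newFile.getMetadata();
const token = (metadata.metadata || {}).firebaseStorageDownloadTokens;
const url =
  `https://firebasestorage.googleapis.com/v0/b/${defaultBucket.name}` +
  `/o/${encodeURIComponent(newPath)}?alt=media&token=${token}`;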

cosmos db, generate authentication key on client for Azure Table endpoint

Cosmos DB with the Azure Table API gives you two endpoints in the Overview blade:
Document Endpoint
Azure Table Endpoint
An example of (1) is
https://myname.documents.azure.com/dbs/tempdb/colls
An example of (2) is
https://myname.table.cosmosdb.azure.com/FirstTestTable?$filter=PartitionKey%20eq%20'car'%20and%20RowKey%20eq%20'124'
You can create the authorization code for (1) on the client using the prerequest code from this Postman script: https://github.com/MicrosoftCSA/documentdb-postman-collection/blob/master/DocumentDB.postman_collection.json
Which will give you a code like this:
Authorization: type%3Dmaster%26ver%3D1.0%26sig%3DavFQkBscU...
This is useful for playing with the REST URLs.
For (2) the only code I could find to generate a code that works was on the server side and gives you a code like this:
Authorization: SharedKey myname:JXkSGZlcB1gX8Mjuu...
I had to get this out of Fiddler
My questions:
(i) Can you generate a code for case (2) above on the client, like you can for case (1)?
(ii) Can you securely use Cosmos DB from the client?
If you go to the Azure Portal for a GA Table API account you won't see the document endpoint anymore. Instead only the Azure Table Endpoint is advertised (e.g. X.table.cosmosdb.azure.com). So we'll focus on that.
When using anything but direct mode with the .NET SDK, our existing SDKs talking to the X.table.cosmosdb.azure.com endpoint use the SharedKey authentication scheme. There is also a SharedKeyLite scheme which should also work. Both are documented in https://learn.microsoft.com/en-us/rest/api/storageservices/authentication-for-the-azure-storage-services. Make sure you read the sections specifically on the Table Service. The thing to notice is that a SharedKey header is directly tied to the request it is associated with, so basically every request needs a unique header. This is good for security, because it means a leaked header can only be used for a limited time to replay one specific request; it can't be used to authorize other requests. But of course that is exactly what you are trying to do.
An alternative is the SharedKeyLite header, which is a bit easier to implement, as it just requires a date and the URL.
But we don't have externalized code libraries to really help with either.
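For orientation only, a hedged sketch of Shared Key Lite signing for the Table service, per the docs linked above (the string-to-sign is the Date header plus the canonicalized resource; verify against the current docs before use):

const crypto = require("crypto");

const accountKey = process.env.ACCOUNT_KEY; // Base64 account key from the portal
const date = new Date().toUTCString();
const canonicalizedResource = "/myname/ATable"; // /account/table
const stringToSign = `${date}\n${canonicalizedResource}`;

// HMAC-SHA256 with the Base64-decoded account key.
const sig = crypto
  .createHmac("sha256", Buffer.from(accountKey, "base64"))
  .update(stringToSign, "utf8")
  .digest("base64");
const authHeader = `SharedKeyLite myname:${sig}`; // send with a matching Date header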
But there is another solution that is much friendly to things like Fiddler or Postman, which is to use a SAS URL as defined in https://blogs.msdn.microsoft.com/windowsazurestorage/2012/06/12/introducing-table-sas-shared-access-signature-queue-sas-and-update-to-blob-sas/.
There are at least two ways to get a SAS token. One way is to generate one yourself. Here is some sample code to do that:
var connectionString = "DefaultEndpointsProtocol=https;AccountName=tableaccount;AccountKey=X;TableEndpoint=https://tableaccount.table.cosmosdb.azure.com:443/;";
var tableName = "ATable";

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tableName);
await table.CreateIfNotExistsAsync();

SharedAccessTablePolicy policy = new SharedAccessTablePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1000),
    Permissions = SharedAccessTablePermissions.Add
        | SharedAccessTablePermissions.Query
        | SharedAccessTablePermissions.Update
        | SharedAccessTablePermissions.Delete
};

string sasToken = table.GetSharedAccessSignature(
    policy, null, null, null, null, null);
This returns the query portion of the URL you will need to create a SAS URL.
Another, code-free, way to get a SAS URL is to go to https://azure.microsoft.com/en-us/features/storage-explorer/ and download the Azure Storage Explorer. When you start it up, it will show you the "Connect to Azure Storage" dialog. In that case:
Select "Use a connection string or a shared access signature URI" and click Next.
Select "Use a connection string", paste in the connection string from the Azure Portal for your Azure Cosmos DB Table API account, click Next, and then click Connect in the next dialog.
In the Explorer pane on the left, look for your account under "Storage Accounts" (NOT "Cosmos DB Accounts (Preview)"), click on Tables, and then right-click on the specific table you want to explore. In the right-click menu you will see an entry for "Get Shared Access Signature"; click on that.
A new dialog titled "Generate Shared Access Signature" will show up. Unfortunately, so will an error dialog complaining about "NotImplemented"; you can ignore that. Just click OK on the error dialog.
Now you can choose how to configure your SAS. I usually just take the defaults, since that gives the widest access permission. Now click Create.
The result will be a dialog with both a complete URL and a query string.
So now we can take that URL (or create it ourselves using the query output from the code) and create a fiddler request:
GET https://tableaccount.table.cosmosdb.azure.com/ATable?se=2018-01-12T05%3A22%3A00Z&sp=raud&sv=2017-04-17&tn=atable&sig=X&$filter=PartitionKey%20eq%20'Foo'%20and%20RowKey%20eq%20'bar' HTTP/1.1
User-Agent: Fiddler
Host: tableaccount.table.cosmosdb.azure.com
Accept: application/json;odata=nometadata
DataServiceVersion: 3.0
To make the request more interesting, I added a $filter operation. This is an OData filter that lets us explore the content. Note that, to make $filter work, both the Accept and DataServiceVersion headers are needed. But you can use the base URL (i.e. without the filter parameter) to make any of the REST API calls on a specific table.
Do be aware that the SAS token is scoped to an individual table, so higher-level operations won't work with this SAS token.
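The same request as a runnable sketch (sasQuery is the query string produced by GetSharedAccessSignature above; the account and table names are placeholders):

const filter = encodeURIComponent("PartitionKey eq 'Foo' and RowKey eq 'bar'");
const res = await fetch(
  `https://tableaccount.table.cosmosdb.azure.com/ATable${sasQuery}&$filter=${filter}`,
  {
    headers: {
      // Both headers are required for $filter to work, as noted above.
      Accept: "application/json;odata=nometadata",
      DataServiceVersion: "3.0",
    },
  }
);
console.log(await res.json());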

Paw Extensions : Dynamic Value based on URI

I have an API that includes an account ID as part of the URL (e.g. /account/7319310/report, where 7319310 is the account ID).
There are different credentials for each account, stored in MySQL, although they could be stored in another manner if that made it easier.
I'd like Paw to automatically use the correct credentials based on the account parameter in the URI (it's always the second element). Is this possible?
In Paw you can use a regex dynamic value to extract the data you need from the URL.
Paw does not have a direct connection to MySQL. You can make HTTP requests from a custom dynamic value, but you would need a server running to answer those requests. A better option would be to save the credentials into a flat JSON file:
{
  "1234334": {
    "key1": 123456,
    "key2": 345211
  }
}
With this saved, you can load the JSON file in a Custom Dynamic Value. Here you can embed the extracted account ID by using the regex dynamic value inline in the code. Paw will reload the file on every request, so you could set up a cron job to dump your database to this JSON file.
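A rough sketch of what that Custom Dynamic Value script could look like (readFile and the request object's urlBase property are assumptions about Paw's extension API; check the Paw docs before relying on them):

// Load the credentials file on every evaluation (Paw re-runs this per request).
var creds = JSON.parse(readFile("/path/to/credentials.json"));

// Extract the account ID: it is always the second path element.
var match = request.urlBase.match(/\/account\/(\d+)\//);
var accountId = match ? match[1] : null;

// The last expression is the dynamic value's output.
accountId && creds[accountId] ? String(creds[accountId].key1) : "";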
