Meteor CollectionFS, condition for each store? - meteor

Is it possible to add a condition for each FS.Store for Meteor CollectionFS?
I'm looking to do some sort of function to do a check before each FS.Store and if it fails, just don't upload at all.
For example, I'm trying to check if the uploading image is a certain size. If it isn't, I'd like to stop proceeding with the upload for that FS.Store.

CollectionFS provides an efficient way to upload a file, a URL, a blob, etc. to a storage adapter such as the filesystem, GridFS or S3. Checking the data you are about to send to the server should be handled before the upload reaches the collection.
Since CollectionFS supports several kinds of input, you may (or may not) be able to filter the content before uploading it:
File object (client only) // YES
Blob object (client only) // YES
Uint8Array // YES
A data URI string // YES
A full URL that begins with "http:" or "https:" // NO, not applicable
A local filepath (server only) // NO (server only)
ArrayBuffer Buffer (server only) // NO (server only)
The reason is simple: in your template you can enumerate the files you are about to upload:
Template.myForm.events({
  'change .myFileInput': function(event, template) {
    FS.Utility.eachFile(event, function(file) {
      // Test here what you want to test on "file", return if the test failed
      Images.insert(file, function (err, fileObj) {
      });
    });
  }
});
If you are uploading from a URL, you obviously cannot check the content before it has been downloaded. If you are uploading a file, blob, data URI string or Uint8Array, the data is still on the client side, so it is your job to analyse it and grant or deny the upload.
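For example, a minimal client-side size check could look like the sketch below (the Images collection comes from the code above; the 2 MB limit is just a made-up threshold):
Template.myForm.events({
  'change .myFileInput': function(event, template) {
    FS.Utility.eachFile(event, function(file) {
      // "file" is a File/Blob on the client, so its byte size is available before upload
      if (file.size > 2 * 1024 * 1024) {
        console.warn('Image too large, skipping upload');
        return; // nothing is inserted, so no FS.Store ever receives the file
      }
      Images.insert(file, function (err, fileObj) {
        if (err) console.error(err);
      });
    });
  }
});
If the check is about pixel dimensions rather than byte size, load the file into an Image (e.g. via URL.createObjectURL) and only call Images.insert from its onload handler.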

Related

ASP .net 6 download file by httpclient - problem with stream

I'm creating a Blazor Server app that uses an external file storage service with a REST API.
I want to create a download button to get a file from the storage. This may seem easy, but it isn't.
From the file storage I download the HttpContent like this:
var request = new HttpRequestMessage(HttpMethod.Get, _url);
request.Headers.Add("auth-token", token);
request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/octet-stream"));
HttpResponseMessage response = await _Http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
var content = response.Content;
Next I follow this tutorial: https://learn.microsoft.com/en-us/aspnet/core/blazor/file-downloads?view=aspnetcore-6.0
var fileStream = content.ReadAsStream();
using (var streamRef = new DotNetStreamReference(fileStream))
{
    await JS.InvokeVoidAsync("downloadFileFromStream", "file.txt", streamRef);
}
For small files everything works great. But if I try to download a large file (100 MB), the code first downloads the whole file into the server's memory (RAM) and only afterwards saves it to the client's local disk.
Ideally, when I click the download button, the file from the external storage would start downloading right away (with a progress bar), just like a plain file served by an HTTP server, e.g. https://www.example.com/file.txt, without being buffered in a stream first. Of course it should still go through my Blazor Server application, with authorization, authentication and all the necessary services.
I have a solution:
Create a service that talks to the file storage API.
Create a controller to avoid cross-origin errors.
Use the Microsoft tutorial to create the download button: https://learn.microsoft.com/en-us/aspnet/core/blazor/file-downloads?view=aspnetcore-6.0
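For reference, the linked tutorial has the page register a JavaScript helper roughly like the one below; the downloadFileFromStream name matches the JS.InvokeVoidAsync call in the question:
// JS helper from the linked Microsoft tutorial (approximate)
window.downloadFileFromStream = async (fileName, contentStreamReference) => {
  // Read the .NET stream into the browser as an ArrayBuffer and wrap it in a Blob
  const arrayBuffer = await contentStreamReference.arrayBuffer();
  const blob = new Blob([arrayBuffer]);
  const url = URL.createObjectURL(blob);
  // Trigger a normal browser download via a temporary anchor element
  const anchorElement = document.createElement('a');
  anchorElement.href = url;
  anchorElement.download = fileName ?? '';
  anchorElement.click();
  anchorElement.remove();
  URL.revokeObjectURL(url);
};
Note that this helper still buffers the whole file in browser memory, which is why routing large downloads through a dedicated controller endpoint, as in the solution above, scales better.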

How to get uri to included asset in Uno Platform?

For integrating with an Android binding project I need to provide a Uri pointing to a .zip file, which I have included in the assets and designated as AndroidAsset. How do I get the Uri for such a file?
I already tried ms-appx:///Assets/file.zip and file:///Assets/file.zip.
Update:
Important to note is that the function consuming the Uri is Android native code, so I suspect that ms-appx:// doesn't get resolved properly.
Update2:
It is not possible to provide a stream.
The method I am calling is shown in the sample here: https://github.com/Laerdal/Xamarin.Nordic.DFU.Android/blob/7244627c09e97e05ee2c8e05744f19055981486b/Sample/Nordic/FirmwareUpdater.cs#L27.
_dfuServiceInitiator.SetZip(firmwareZipFile);
The native implementation is shown here: https://github.com/NordicSemiconductor/Android-DFU-Library/blob/07bdaa50cfc5786790bf1ac589b14931de65d099/dfu/src/main/java/no/nordicsemi/android/dfu/DfuServiceInitiator.java#L620
public DfuServiceInitiator setZip(@NonNull final Uri uri) {
    return init(uri, null, 0, DfuBaseService.TYPE_AUTO, DfuBaseService.MIME_TYPE_ZIP);
}
Files obtained from the StorageFile.GetFileFromApplicationUriAsync() method may not have a stable path the way they do on UWP. This is particularly the case on Android or WebAssembly, where the file is not necessarily on the file system.
On Android, the file is a Stream built directly from the APK file, and on WebAssembly it is stored in a temporary location.
In order to keep a stable copy of the file, use the following:
var file = await StorageFile.GetFileFromApplicationUriAsync(new System.Uri("ms-appx:///TextFile.txt"));
var newFile = await file.CopyAsync(Windows.Storage.ApplicationData.Current.LocalFolder, file.Name, NameCollisionOption.ReplaceExisting);
var txt = await FileIO.ReadTextAsync(newFile);
The method StorageFile.GetFileFromApplicationUriAsync can be used to get a StorageFile object from an ms-appx Uri.
Then you can use the Path property of the StorageFile to get the local Android path. Note that you need to set the build action of the file to Content.
var file = await StorageFile.GetFileFromApplicationUriAsync(new System.Uri("ms-appx:///Assets/file.zip"));
Android.Net.Uri zipUri = Android.Net.Uri.Parse("file:///"+file.Path);

How to upload files or images on hasura graphql engine

Example:
upload file to server and save resulting path to the database, only authenticated users should be able to upload files
How to implement this?
To summarize, we have three ways:
the client uploads to S3 (or a similar service), gets the file URL, then makes an insert/update mutation on the right table
a custom uploader: write an application/server that uploads the files and mutates the db, and use nginx routing to redirect some requests to it
a custom resolver using schema stitching (example)
If you are uploading files to AWS S3, there is a simple way that doesn't require launching another server to process the upload or creating a handler for a Hasura action.
Basically, when you upload files to S3, it's better to get a signed URL from the backend and upload to S3 directly. By the way, for hosting multiple image sizes, this approach is easy and painless.
The critical point is how to get an S3 signed URL for the upload.
In Node.js, you can do:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600,
});
console.log("signedUrl", signedUrl);
A signedUrl example is like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally you would put the above code into a handler hosted on AWS Lambda or Glitch, add some logic for authorization, and perhaps also insert a row into a table.
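A minimal sketch of such a handler, here as a Node.js Lambda function (checkAuth is a hypothetical placeholder for your real token validation):
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

// Placeholder auth check; replace with real JWT/session validation
const checkAuth = (authHeader) =>
  authHeader === "Bearer demo-token" ? { id: "demo-user" } : null;

exports.handler = async (event) => {
  const user = checkAuth(event.headers && event.headers.authorization);
  if (!user) {
    return { statusCode: 401, body: "Unauthorized" };
  }
  const { filename } = JSON.parse(event.body);
  const signedUrl = s3.getSignedUrl("putObject", {
    Bucket: "my-bucket",
    Key: "uploads/" + user.id + "/" + filename,
    Expires: 600,
  });
  // Optionally insert a row into the files table here (e.g. via a Hasura mutation)
  return { statusCode: 200, body: JSON.stringify({ signedUrl }) };
};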
You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to get Signature?
After digging into the AWS JS SDK, we can find that the signature is computed here:
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
where
fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'
It's just an HMAC-SHA1 over a string in a certain format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result.
So if you have a table "files"
CREATE TABLE files (
  id SERIAL,
  created_at timestamp,
  filename text,
  user_id integer
);
you can create a SQL function
CREATE OR REPLACE FUNCTION public.file_signed_url(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT ENCODE( HMAC(
    'PUT' ||E'\n'||E'\n'||E'\n'||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    ||E'\n'|| '/my-bucket/' || file_row.filename
  , 'AWS_SECRET', 'SHA1'), 'BASE64')
$function$;
Finally, follow this guide to expose the computed field to Hasura.
This way you don't have to add any backend code, and you can handle permissions entirely in Hasura.
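On the client you would then read the row (filename, created_at and the computed field) from Hasura and assemble the signed URL yourself. A rough sketch, assuming the computed field is exposed as file_signed_url and using a placeholder access key id that must pair with the AWS_SECRET used in the SQL function:
// Build the signed PUT URL from a Hasura "files" row and upload the file
async function uploadToS3(file, row) {
  // Expires must match the SQL function: epoch(created_at) + 600
  const expires = Math.floor(new Date(row.created_at).getTime() / 1000) + 600;
  const url = "https://my-bucket.s3.amazonaws.com/" + row.filename +
    "?AWSAccessKeyId=YOUR_ACCESS_KEY_ID" +
    "&Expires=" + expires +
    "&Signature=" + encodeURIComponent(row.file_signed_url);
  const res = await fetch(url, { method: "PUT", body: file });
  if (!res.ok) throw new Error("Upload failed: " + res.status);
}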

Meteor - should inserting data from an api go into a method?

A real simple and short question here: should I put this API call into a method when uploading files, or keep it on the client side? Also, what is the point of methods? I know it's to keep your app safe, but I am not sure how a user could break the app. And can you explain when to use methods?
readImage(e){
  let file = e.target.files[0];
  const CLOUDINARY_URL = "my_URL";
  const CLOUDIARY_UPLOAD_PRESET = "my_Upload_Preset";
  let formData = new FormData();
  formData.append("file", file);
  formData.append("upload_preset", CLOUDIARY_UPLOAD_PRESET);
  axios({
    url: CLOUDINARY_URL,
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded"
    },
    data: formData
  }).then(function(res){
    console.log(res);
    console.log(res.data.secure_url);
  }).catch(function(err){
    console.log(err);
  });
  console.log(file);
}
The file upload itself needs to happen on the client, but it is good practice to put any processing into a server method. If you do all the processing on the client, you have to enable database access from the client, meaning that a malicious user could modify your database from the browser console.
On the server you should check whether the logged-in user has permission to perform the requested operation. These methods can still be called by entering commands in the browser console, but you make the attack surface much smaller by reducing the number of operations available.
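For the Cloudinary example above, that could mean keeping the axios upload on the client and passing only the resulting secure_url to a server method. A minimal sketch (the Photos collection name is just an assumption):
// server-side
Meteor.methods({
  'photos.insert': function (secureUrl) {
    check(secureUrl, String);
    // Only logged-in users may save an uploaded image URL
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    return Photos.insert({
      url: secureUrl,
      userId: this.userId,
      createdAt: new Date()
    });
  }
});
On the client you would call it from the .then handler: Meteor.call('photos.insert', res.data.secure_url);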

Meteor: what code goes on the client side and server side?

I just don't know exactly what I should put on the server side and what on the client side. I understand that the templates go on the client side. But what about the JavaScript code? Can you give me an example of some code that goes on the server side?
You write your business logic and complex database operations in your server-side code, typically the code you don't want to serve to the client.
For example.
Method calls
// client-side
Template.post.events({
  "click #add-post": function(e) {
    var post, post_object;
    post = $("#post-message").val().trim();
    post_object = {
      user_id: Meteor.userId(),
      post: post
    };
    Meteor.call("create_post", post_object, function(error, response) {
      if (error) {
        // ..do something
      } else {
        // .. do something else
      }
    });
  }
});
// server-side
Meteor.methods({
  create_post: function(post_object) {
    return Posts.insert(post_object);
  }
});
publish / subscribe
// common
Posts = new Mongo.Collection("posts");
// client-side
Meteor.subscribe("posts");
// server-side
Meteor.publish("posts", function(limit) {
  return Posts.find({
    user_id: this.userId
  });
});
HTML, CSS and template managers should go into the client-side code. Meteor methods and publications should go into the server-side code. Read more about structuring the app and data security in the official docs.
Here is an example for a collection: Declare, publish and subscribe to it.
Shared by server and client (any directory except private, client or server; don't use public for this either), declare the collection:
Rocks = new Meteor.Collection('rocks');
Server-side (server directory or inside a Meteor.isServer condition), publish the collection:
Meteor.publish('allRocks', function()
{
  return Rocks.find();
});
Client-side (client directory or in a Meteor.isClient condition), subscribe to the publication:
Meteor.subscribe('allRocks');
You can find a lot of examples in the documentation or in this blog (Discover Meteor).
Edit: For more precision according to OP's question... All code is shared by default (executed by both the server and the client). However, files in the server and private directories will never be sent to the client.
If you create a directory named client, that code goes only to the client.
If you create a directory named server, that code goes only to the server.
Everything else you write goes to both the client and the server (even if you use a Meteor.isServer check).
You can read more about the directory structure here.
You use Meteor.isClient and Meteor.isServer to load the code in the proper place.
Using the folder:
server - goes to the server duh!
client - goes to the client duh!
both - shared code
Everything that is placed outside client or server is loaded in both places.
When you create a Meteor package you have to add the files manually and specify where they should be loaded, for example:
api.add_files(['my-packages.js', 'another-file.js'], 'client');
api.add_files(['server/methods.js'], 'server');
In this example, even though there is a server folder, that alone doesn't determine where the code is loaded; in the package scenario the target you pass to add_files does.
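For context, those api.add_files calls live in the package's package.js. A minimal sketch (the package name and version are made up):
// package.js
Package.describe({
  name: 'me:my-package',
  version: '0.0.1'
});

Package.onUse(function (api) {
  // The second argument decides where each file is loaded, regardless of its folder
  api.add_files(['my-packages.js', 'another-file.js'], 'client');
  api.add_files(['server/methods.js'], 'server');
});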
Sometimes you have code that runs on both the client and the server, but some functionality should only be available on the server or on the client.
Example:
ImageManager = {
  uploadImageToAmazonS3 : function(){
    if (Meteor.isServer) {
      //your code goes here
      //YOU DON'T WANT TO SEND YOUR AMAZON PRIVATE KEY TO THE CLIENT
      //BAD THINGS CAN HAPPEN LIKE A HUGE BILL
      var amazonCredentials = Config.amazon.secretKey;
    } else {
      throw new Error("You can't call this on the client.");
    }
  }
}
This is a scenario where you can add functions that the client can run, like resizeImage, cropImage, etc., and the server can run them too: this is shared code. Sending a private API key to the client is out of the question, but this file will still be shared by the server and the client.
Documentation: http://docs.meteor.com/#/basic/Meteor-isServer
According to the documentation this doesn't prevent the code from being sent to the client; it simply won't run there.
With this approach an attacker knows how things work on the server and might try an attack vector based on the code you sent to them.
The best option here is to extend ImageManager only on the server. On the client this function shouldn't even exist, or you can simply add a function that throws a "Not available" error.
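A sketch of that server-only extension (the file paths are just an example layout):
// lib/image-manager.js - shared code, shipped to both client and server
ImageManager = {
  resizeImage: function (image, width, height) {
    // ...shared image helpers that are safe on the client...
  },
  uploadImageToAmazonS3: function () {
    // Default implementation; the server overrides this below
    throw new Error("Not available.");
  }
};

// server/image-manager.js - never sent to the client
ImageManager.uploadImageToAmazonS3 = function (image) {
  // The secret key only ever exists in server code
  var amazonCredentials = Config.amazon.secretKey;
  // ...upload to S3 using amazonCredentials...
};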
