Meteor CollectionFS / GridFS verify correct upload of a file - meteor

In our Meteor app we sometimes run into the issue that, after uploading a file, the generated download link is .../null and the file is not retrievable anymore or was never uploaded correctly.
There are no errors logged at all.
FileUploads.insert(file, function (err, fileObj) {
    if (err) { // this error never triggers
        log(err);
    } else {
        if (fileObj.isUploaded()) { // is true after both correct and corrupt uploads
            FileUploads.find(fileObj._id); // fetches correct metadata even if the file upload was corrupt
            if (fileObj.url() === 'null') {
                throw new Meteor.Error(...); // never thrown, even when the URL was "null"
            }
        }
    }
});
List of used cfs packages:
cfs:access-point#0.1.49_2
cfs:base-package#0.0.30
cfs:collection#0.5.5
cfs:collection-filters#0.2.4
cfs:data-man#0.0.6
cfs:file#0.1.17
cfs:gridfs#0.0.34
cfs:http-methods#0.0.32
cfs:http-publish#0.0.13
cfs:power-queue#0.9.11
cfs:reactive-list#0.0.9
cfs:reactive-property#0.0.4
cfs:standard-packages#0.5.10
cfs:storage-adapter#0.2.4
cfs:tempstore#0.1.6
cfs:upload-http#0.0.20
cfs:worker#0.1.5
So how can we make sure the file was uploaded correctly right after an upload?
fileObj.isUploaded() does not seem reliable. Is there a better way to verify a correct upload?
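One approach worth trying (a rough, untested sketch): instead of relying on fileObj.isUploaded(), re-fetch the file document after the insert and only treat the upload as complete once a copy exists in the actual store. If I remember the CollectionFS API correctly, FS.File has a hasStored(storeName) helper for exactly this; the store name "gridfs" and the 30-second timeout below are assumptions to adapt to your setup:

FileUploads.insert(file, function (err, fileObj) {
    if (err) return log(err);

    var attempts = 0;
    var timer = Meteor.setInterval(function () {
        var doc = FileUploads.findOne(fileObj._id);
        attempts += 1;

        if (doc && doc.hasStored("gridfs")) {
            // the store reports a finished copy, so doc.url() should now be usable
            Meteor.clearInterval(timer);
        } else if (attempts > 30) {
            // give up after roughly 30 seconds and surface the failure explicitly
            Meteor.clearInterval(timer);
            log(new Meteor.Error("upload-failed", "File never reached the store"));
        }
    }, 1000);
});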

Related

Viewing firebase storage dms files

I'm on a Mac. What application can open the dms files you download directly from the console?
Preview doesn't work, and I also tried Unarchiver per the following link, but no go:
https://fileinfo.com/extension/dms
Okay, so it turns out that you only need to change the extension. In my case it was from .dms to .jpg.
I think you're trying to download an image stored in Firebase Storage via its download URL, but you're getting a file with a .dms extension instead of the actual image file. The problem is that you're not providing the content type in the metadata while uploading it.
Without it, the type of the image is application/octet-stream.
If you then download the file with the download URL, you'll get a file with a .dms extension.
If you want to download the actual image, add the content type to the metadata and pass it to the putData method.
let metaDataForImage = StorageMetadata()
metaDataForImage.contentType = "image/jpeg"
Now check the image in Firebase Storage: its type is image/jpeg.
Try downloading the image with the download URL again; the file will be downloaded with a .jpeg extension instead of a .dms extension. That's it!
Sample code snippet:
let metaDataForImage = StorageMetadata()
metaDataForImage.contentType = "image/jpeg"

storageRef.putData(imageData, metadata: metaDataForImage, completion: { (metadata, error) in
    print("Image uploaded to Firebase successfully")
    //MARK: - Uploading Image URL
    self.storageRef.downloadURL(completion: { (url, error) in
        if url != nil {
            guard let profileImageURL = url?.absoluteString else { return }
            FirebaseDatabaseReference.users(uid: userUID).reference().updateChildValues(["profileImageURL": profileImageURL], withCompletionBlock: { (error, reference) in
                if error == nil {
                    print("URL Uploaded to Firebase DB")
                } else {
                    print("Failed to upload Image URL to Firebase DB")
                }
            })
        }
    })
})

File download from API to Meteor server and upload to S3

I am sending a request from my Meteor server to download a file via an API. I then want to upload that file to S3. I keep getting the following error: "NoSuchKey: The specified key does not exist." I initially thought it was maybe a problem with my AccessKey/SecretKey from AWS, but after googling this for a while, the only examples I could find of other people getting this error were when trying to download a file from S3.
Setting up cfs:s3
var imageStore = new FS.Store.S3("images", {
    accessKeyId: "MyAccessKeyId", // required if environment variables are not set
    secretAccessKey: "MySecretAccessKey", // required if environment variables are not set
    bucket: "BucketName" // required
});

Images = new FS.Collection("images", {
    stores: [imageStore]
});
Start file transfer from API and upload to S3
client.get_result(id, Meteor.bindEnvironment(function (err, result) { // result is the download stream; id specifies which file to download
    if (err !== null) {
        return;
    }
    var file = new FS.File(result);
    Images.insert(file, function (err, fileObj) {
        if (err) {
            console.log(err);
        }
    });
}));
Note: I was getting the following error so I added Meteor.bindEnvironment.
"Meteor code must always run within a Fiber. Try wrapping callbacks that you pass to non-Meteor libraries with Meteor.bindEnvironment."
Node.js example from API Documentation
client.get_result(id, function (err, result) {
    if (err != null) {
        return;
    }
    file.writeFile(path.join('public', path.join('results', filename)), result, 'binary');
});
What ended up fixing the problem for me was moving part of the setup to the lib folder. Although I tried several different ways, I was unable to get it to execute entirely on the server. It looks like the documentation was updated recently and now states everything a bit more clearly. If you follow this setup it should eliminate the error. See the section titled "Client, Server, and S3 credentials":
https://github.com/CollectionFS/Meteor-CollectionFS/tree/master/packages/s3
Note: Make sure not to place your secret key in your lib folder, as it is accessible from the client.
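For reference, here is a rough sketch of the split that section describes, as I understand it (names and keys here are placeholders, not real values): the client only declares the store by name, while the credentials live in server-only code:

// client and server both need the collection, but only the server gets the keys
if (Meteor.isClient) {
    var imageStore = new FS.Store.S3("images"); // no credentials on the client
    Images = new FS.Collection("images", {
        stores: [imageStore]
    });
}

if (Meteor.isServer) {
    var imageStore = new FS.Store.S3("images", {
        accessKeyId: "MyAccessKeyId",         // better: set these via environment variables
        secretAccessKey: "MySecretAccessKey",
        bucket: "BucketName"
    });
    Images = new FS.Collection("images", {
        stores: [imageStore]
    });
}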

How to protect a file directory and only allow authenticated users to access the files?

How do I restrict a folder so that only users who are logged in to my Meteor app can download files?
I looked into multiple ways of doing this, but the main problem is that I can't access the current user (I get null) with:
Meteor.user() or this.userId()
I tried:
__meteor_bootstrap__.app
    .use(connect.query())
    .use(function (req, res, next) {
        Fiber(function () {
            // USER HERE?
        }).run();
    });
or
__meteor_bootstrap__.app.stack.unshift({
    route: "/protected/secret_document.doc", // only users can download this
    handle: function (req, res) {
        Fiber(function () {
            // CHECK USER HERE?
            // IF NOT LOGGED IN:
            res.writeHead(403, {'Content-Type': 'text/html'});
            var content = '<html><body>403 Forbidden</body></html>';
            res.end(content, 'utf-8');
        }).run();
    }
});
You could try storing the files in MongoDB, which would mean they are hooked into your collection system and queryable on the client and server. Then just publish the relevant data to the client for specific users, or use Meteor.methods to expose the information that way (a sketch of the method-based variant follows the example below).
Example:
Assuming files are stored in MongoDB, let's first publish them to the client:
Meteor.publish("files", function(folder) {
if (!this.userId) return;
// the userHasAccessToFolder method checks whether
// this user is allowed to see files in this folder
if (userHasAccessToFolder(this.userId, folder))
// if so, return the files for that folder
// (filter the results however you need to)
return Files.find({folder: folder});
});
Then on the client, we autosubscribe to the published channel so that whenever it changes, it gets refreshed:
Meteor.startup(function () {
    Meteor.autosubscribe(function () {
        // send the current folder to the server,
        // which will return the files in the folder
        // only if the current user is allowed to see it
        Meteor.subscribe("files", Session.get("currentFolder"));
    });
});
NB. I haven't tested the above code, so consider it pseudocode, but it should point you in the general direction for solving this problem. The hard part is storing the files in MongoDB!
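For the Meteor.methods alternative mentioned above, an equally untested sketch (reusing the userHasAccessToFolder helper from the publish example) could look like this:

// Server: return file metadata only to logged-in users with access to the folder
Meteor.methods({
    filesInFolder: function (folder) {
        if (!this.userId || !userHasAccessToFolder(this.userId, folder)) {
            throw new Meteor.Error(403, "Not allowed");
        }
        return Files.find({folder: folder}).fetch();
    }
});

// Client: ask for the list on demand instead of subscribing
Meteor.call("filesInFolder", Session.get("currentFolder"), function (err, files) {
    if (!err) {
        console.log(files);
    }
});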
I'd be more concerned as to why Meteor.user() isn't working.
A few questions:
are you on Meteor 0.5.0?
have you added accounts-base to your Meteor project?
have you used one of Meteor's login systems (accounts-password, accounts-facebook, etc.)? (optional: accounts-ui for ease of use)
have you still got autopublish on, or have you set up publishing/subscription properly?
Meteor.user() should be the current user, and Meteor.users should be a Meteor collection of all previously logged-in users.
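A quick way to sanity-check that the accounts setup is wired up at all: a minimal sketch using the old Meteor.autorun reactive helper of that era (newer versions use Tracker.autorun), so adjust to your Meteor version:

if (Meteor.isClient) {
    Meteor.autorun(function () {
        // re-runs reactively whenever the login state changes
        console.log('userId:', Meteor.userId(), 'user doc:', Meteor.user());
    });
}

if (Meteor.isServer) {
    // Meteor.users is an ordinary collection of user documents
    console.log('known users:', Meteor.users.find().count());
}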

In Node.JS, when I do a POST request, what is the maximum size? [duplicate]

I created an upload script in Node.js using Express/formidable. It basically works, but I am wondering where and when to check the uploaded file, e.g. for the maximum file size or whether the file's mimetype is actually allowed.
My program looks like this:
app.post('/', function (req, res, next) {
    req.form.on('progress', function (bytesReceived, bytesExpected) {
        // ... do stuff
    });
    req.form.complete(function (err, fields, files) {
        console.log('\nuploaded %s to %s', files.image.filename, files.image.path);
        // ... do stuff
    });
});
It seems to me that the only viable place for checking the mimetype/file size is the complete event, where I can reliably use the filesystem functions to get the size of the uploaded file in /tmp/ – but that seems like a bad idea because:
the possibly malicious/too large file is already uploaded on my server
the user experience is poor – you watch the upload progress just to be told afterwards that it didn't work
What's the best practice for implementing this? I found quite a few examples for file uploads in Node.js, but none seemed to do the security checks I would need.
With help from some guys at the node IRC and the node mailing list, here is what I do:
I am using formidable to handle the file upload. Using the progress event I can check the maximum filesize like this:
form.on('progress', function (bytesReceived, bytesExpected) {
    if (bytesReceived > MAX_UPLOAD_SIZE) {
        console.log('### ERROR: FILE TOO LARGE');
    }
});
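The snippet above only logs the violation; the transfer itself keeps running. If you also want to stop it, one option (an untested sketch, assuming the req/res objects of the surrounding route handler are still in scope) is to answer with a 413 and tear the connection down:

form.on('progress', function abortIfTooLarge(bytesReceived, bytesExpected) {
    if (bytesReceived > MAX_UPLOAD_SIZE) {
        form.removeListener('progress', abortIfTooLarge);
        res.writeHead(413, {'Content-Type': 'text/plain'});
        res.end('Request entity too large');
        req.destroy(); // stop reading the rest of the upload
    }
});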
Reliably checking the mimetype is much more difficult. The basic idea is to use the progress event and, once enough of the file has been uploaded, run a file --mime-type call and check the output of that external command. Simplified, it looks like this:
// contains the path of the uploaded file,
// is grabbed in the fileBegin event below
var tmpPath;

form.on('progress', function validateMimetype(bytesReceived, bytesExpected) {
    var percent = (bytesReceived / bytesExpected * 100) | 0;
    // pretty basic check if enough bytes of the file are written to disk,
    // might be too naive if the file is small!
    if (tmpPath && percent > 25) {
        var child = exec('file --mime-type ' + tmpPath, function (err, stdout, stderr) {
            var mimetype = stdout.substring(stdout.lastIndexOf(':') + 2, stdout.lastIndexOf('\n'));
            console.log('### file CALL OUTPUT', err, stdout, stderr);
            if (err || stderr) {
                console.log('### ERROR: MIMETYPE COULD NOT BE DETECTED');
            } else if (!ALLOWED_MIME_TYPES[mimetype]) {
                console.log('### ERROR: INVALID MIMETYPE', mimetype);
            } else {
                console.log('### MIMETYPE VALIDATION COMPLETE');
            }
        });
        form.removeListener('progress', validateMimetype);
    }
});

form.on('fileBegin', function grabTmpPath(_, fileInfo) {
    if (fileInfo.path) {
        tmpPath = fileInfo.path;
        form.removeListener('fileBegin', grabTmpPath);
    }
});
The new version of Connect (2.x) has this already baked into the bodyParser via the limit middleware: https://github.com/senchalabs/connect/blob/master/lib/middleware/multipart.js#L44-61
I think it's much better this way as you just kill the request when it exceeds the maximum limit instead of just stopping the formidable parser (and letting the request "go on").
More about the limit middleware: http://www.senchalabs.org/connect/limit.html
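Roughly, wiring it up in Connect 2.x looks like this (a sketch from memory; the middleware API changed in later versions, so check the docs for the version you actually run):

var connect = require('connect');

var app = connect()
    .use(connect.limit('10mb'))   // rejects bodies over 10 MB before parsing
    .use(connect.bodyParser())    // includes the multipart/formidable handling
    .use(function (req, res) {
        res.end('uploaded');
    });

app.listen(3000);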

Attempting to open a user-specified file and process it, path is lost

So, I'm working on my first ASP.NET MVC 3 application, and one thing I need to do is take some data exported from someone else's system and, on user action, import it into my system and perform some error checking, etc. on it.
Here's how I have attempted to solve this issue:
I've got a view with a div:
<div>
    <span><b>Recipe Data:</b>
        <input type="file" name="uploadFile" />
        <input type="submit" value="Load" />
    </span>
</div>
and that allows me to choose a file and then submit it. Then I've got a controller action that looks like this:
[HttpPost]
public ActionResult Index(HttpPostedFileBase uploadFile)
{
    try
    {
        // attempt to read the file
    }
    catch (Exception)
    {
        throw;
    }
}
So, when I'm using IE, I can examine the uploadFile parameter and it gives me a path like:
FileName:c:\\Users\\Matt\\Desktop\\TestFiles\\AppleBerry.xml
(which is exactly the full path to the file I picked)
But when I try the same thing in Firefox, that path is stripped off, so uploadFile.FileName is just AppleBerry.xml, and XDocument.Load tries to load it from:
C:\Program Files (x86)\Common files\Microsoft Shared\DevServer\10.0\AppleBerry.xml
So, I'm pretty sure that I'm going about this the wrong way and need some guidance. I need to read in that XML file, preferably via XDocument.Load() and then do some checks and eventually push the records in that xml file into a DB table. The only part I'm having issues with is this file path. Any help you can provide with this would be most appreciated.
Try loading the file directly from the request stream and don't rely on the FileName property because you haven't saved the file on the server yet so it won't find it:
[HttpPost]
public ActionResult Index(HttpPostedFileBase uploadFile)
{
    if (uploadFile != null && uploadFile.ContentLength > 0)
    {
        try
        {
            // attempt to read the file
            var doc = XDocument.Load(uploadFile.InputStream);
            // TODO: do something with the XML document
        }
        catch (Exception)
        {
            // Make sure you do something more meaningful here
            // instead of rethrowing and erasing the stacktrace
            throw;
        }
    }
    else
    {
        // The user didn't upload any file => take respective actions
    }
}
The server does not have access to the client's file system so the original path is irrelevant. Furthermore the file is not saved onto the server file system, so you should be loading it from the InputStream property, as per Darin's answer.
