I am sending a request from my Meteor server to download a file via an API, and I then want to upload that file to S3. I keep getting the following error: "NoSuchKey: The specified key does not exist." I initially thought it was a problem with my AccessKey/SecretKey from AWS, but after googling this for a while, the only examples I could find of other people getting this error were when trying to download a file from S3.
Setting up cfs:s3
var imageStore = new FS.Store.S3("images", {
  accessKeyId: "MyAccessKeyId", // required if environment variables are not set
  secretAccessKey: "MySecretAccessKey", // required if environment variables are not set
  bucket: "BucketName" // required
});
Images = new FS.Collection("images", {
  stores: [imageStore]
});
Start file transfer from API and upload to S3
client.get_result(id, Meteor.bindEnvironment(function (err, result) {
  // result is the download stream; id specifies which file to download
  if (err !== null) {
    return;
  }
  var file = new FS.File(result);
  Images.insert(file, function (err, fileObj) {
    if (err) {
      console.log(err);
    }
  });
}));
Note: I was getting the following error so I added Meteor.bindEnvironment.
"Meteor code must always run within a Fiber. Try wrapping callbacks that you pass to non-Meteor libraries with Meteor.bindEnvironment."
Node.js example from API Documentation
client.get_result(id, function (err, result) {
  if (err != null) {
    return;
  }
  file.writeFile(path.join('public', path.join('results', filename)), result, 'binary');
});
What ended up fixing the problem for me was moving part of the setup to the lib folder. Although I tried several different ways, I was unable to get it to execute entirely on the server. It looks like the documentation was updated recently and now explains everything a bit more clearly; if you follow its setup it should eliminate the error. See the section titled Client, Server, and S3 credentials:
https://github.com/CollectionFS/Meteor-CollectionFS/tree/master/packages/s3
Note: Make sure not to place your secret key in your lib folder, as that folder is accessible from the client.
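If I read that documentation section correctly, the split looks roughly like this. The bucket and key values are placeholders; the important point is that the store is declared on both sides under the same name, while the credentials only ever appear in server code:
// client/images.js -- no credentials here
var imageStore = new FS.Store.S3("images");
Images = new FS.Collection("images", {
  stores: [imageStore]
});

// server/images.js -- credentials stay on the server
var imageStore = new FS.Store.S3("images", {
  accessKeyId: "MyAccessKeyId",
  secretAccessKey: "MySecretAccessKey",
  bucket: "BucketName"
});
Images = new FS.Collection("images", {
  stores: [imageStore]
});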
Related
In our Meteor app we sometimes run into the issue that, after uploading a file, the generated download link is .../null and the file is not retrievable anymore or was never uploaded correctly.
There are no errors logged at all.
FileUploads.insert(file, function (err, fileObj) {
  if (err) { // this error never triggers
    log(err);
  } else {
    if (fileObj.isUploaded()) { // is true after a correct or a corrupt upload
      FileUploads.find(fileObj._id); // fetches correct metadata even if the file upload was corrupt
      if (fileObj.url() === 'null') {
        throw new Meteor.Error(...); // never thrown, even when the URL was "null"
      }
    }
  }
});
List of used cfs packages:
cfs:access-point#0.1.49_2
cfs:base-package#0.0.30
cfs:collection#0.5.5
cfs:collection-filters#0.2.4
cfs:data-man#0.0.6
cfs:file#0.1.17
cfs:gridfs#0.0.34
cfs:http-methods#0.0.32
cfs:http-publish#0.0.13
cfs:power-queue#0.9.11
cfs:reactive-list#0.0.9
cfs:reactive-property#0.0.4
cfs:standard-packages#0.5.10
cfs:storage-adapter#0.2.4
cfs:tempstore#0.1.6
cfs:upload-http#0.0.20
cfs:worker#0.1.5
So how can we make sure the file was uploaded correctly right after an upload?
fileObj.isUploaded() does not seem reliable. Is there a better way to verify a correct upload?
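One workaround we are considering is to wait until the target store actually reports the data as stored before trusting url(). This is only a sketch: it assumes FS.File's hasStored method behaves as documented in CollectionFS, and "uploads" here is a placeholder store name.
FileUploads.insert(file, function (err, fileObj) {
  if (err) return log(err);
  // poll until the store has really written the data;
  // isUploaded() went true for us even when the stored copy was bad
  var check = Meteor.setInterval(function () {
    var f = FileUploads.findOne(fileObj._id);
    if (f && f.hasStored("uploads")) {
      Meteor.clearInterval(check);
      console.log("upload verified, url:", f.url({ store: "uploads" }));
    }
  }, 500);
});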
According to Cloudinary's documentation one should be able to upload an image to Cloudinary from Google Cloud Storage.
However, when I attempt to do so, I get the following error in my Cloud Functions logs.
ENOENT: no such file or directory, open 'gs://my-bucket.appspot.com/01.jpg'
This is my cloud function:
import * as functions from 'firebase-functions';
import * as cloudinary from 'cloudinary';

cloudinary.config({
  cloud_name: functions.config().cloudinary.cloudname,
  api_key: functions.config().cloudinary.apikey,
  api_secret: functions.config().cloudinary.apisecret,
});

export const uploadImageToCloudinary = functions.storage
  .object()
  .onFinalize(object => {
    cloudinary.v2.uploader.upload(
      `gs://${object.bucket}/${object.name}`,
      function (error, result) {
        if (error) {
          console.log(error);
          return;
        }
        console.log(result);
      }
    );
  });
I have added /.wellknown/cloudinary/<cloudinary_cloudname> to my bucket, and I have also added a permission in Cloud Platform to give Cloudinary object viewer access.
Is there an extra step I'm missing? I can't seem to get this working.
Cloudinary does support Google Cloud Storage upload, but it's a relatively new feature and the current version of the Node SDK doesn't handle gs:// URLs.
In your example, it's trying to resolve the gs:// URL on the local server and send the image to Cloudinary, rather than sending the URL to Cloudinary so the fetch happens from Cloudinary's side.
Until this is added to the SDK, you could get this working by triggering the fetch using the URL-based upload method, or by making a small change to the SDK code.
Specifically, it's a small change in lib/uploader.js - you need to add the gs: prefix there, after which it should work OK.
Diff:
diff --git a/lib/uploader.js b/lib/uploader.js
index 2f71eaa..af08e14 100644
--- a/lib/uploader.js
+++ b/lib/uploader.js
@@ -65,7 +65,7 @@
return call_api("upload", callback, options, function() {
var params;
params = build_upload_params(options);
- if ((file != null) && file.match(/^ftp:|^https?:|^s3:|^data:[^;]*;base64,([a-zA-Z0-9\/+\n=]+)$/)) {
+ if ((file != null) && file.match(/^ftp:|^https?:|^gs:|^s3:|^data:[^;]*;base64,([a-zA-Z0-9\/+\n=]+)$/)) {
return [
params, {
file: file
After applying that diff, I successfully fetched an image from Google Cloud Storage.
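For completeness, the URL-based route mentioned above can be sketched without patching the SDK: generate a readable HTTPS URL for the object and hand that to Cloudinary instead of the gs:// path. The firebase-admin import, initializeApp call, and signed-URL options below are my assumptions about a typical Firebase setup, not something from the original question, and signing may require the appropriate IAM permissions on the function's service account:
import * as functions from 'firebase-functions';
import * as cloudinary from 'cloudinary';
import * as admin from 'firebase-admin';

admin.initializeApp();
// cloudinary.config(...) as in the question

export const uploadImageToCloudinary = functions.storage
  .object()
  .onFinalize(async object => {
    // ask GCS for a short-lived signed HTTPS URL for the uploaded object
    const [signedUrl] = await admin
      .storage()
      .bucket(object.bucket)
      .file(object.name!)
      .getSignedUrl({ action: 'read', expires: Date.now() + 10 * 60 * 1000 });

    // the unpatched SDK already handles plain http(s) URLs
    const result = await cloudinary.v2.uploader.upload(signedUrl);
    console.log(result);
  });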
I am trying to create a file upload feature in Meteor where a logged-in user is able to upload a file to the server under a directory named after their username. I have the basics working, but when I take it a step further by checking the logged-in user ID, things start breaking. Specifically:
WebApp.connectHandlers.use('/upload/', function (req, res) {
  if (this.userId) {
    // Do cool stuff.
  } else {
    res.writeHead(500, { "content-type": "text/html" });
    res.end("this.userId = " + this.userId); // End the response.
  }
});
Result:
this.userId = undefined
And...
WebApp.connectHandlers.use('/upload/', function (req, res) {
  if (Meteor.userId()) {
    // Do cool stuff.
  } else {
    res.writeHead(500, { "content-type": "text/html" });
    res.end("Meteor.userId() = " + Meteor.userId()); // End the response.
  }
});
Result:
Error: Meteor.userId can only be invoked in method calls. Use this.userId in publish functions.
at Object.Meteor.userId (packages/accounts-base/accounts_server.js:19:1)
at Object.Package [as handle] (packages/cool_package/upload.js:34:1)
at next (/Users/me/.meteor/packages/webapp/.1.2.0.19shc3d++os+web.browser+web.cordova/npm/node_modules/connect/lib/proto.js:190:15)
at Function.app.handle (/Users/me/.meteor/packages/webapp/.1.2.0.19shc3d++os+web.browser+web.cordova/npm/node_modules/connect/lib/proto.js:198:3)
at Object.fn [as handle] (/Users/me/.meteor/packages/webapp/.1.2.0.19shc3d++os+web.browser+web.cordova/npm/node_modules/connect/lib/proto.js:74:14)
at next (/Users/me/.meteor/packages/webapp/.1.2.0.19shc3d++os+web.browser+web.cordova/npm/node_modules/connect/lib/proto.js:190:15)
at Object.WebAppInternals.staticFilesMiddleware (packages/webapp/webapp_server.js:331:1)
at packages/webapp/webapp_server.js:625:1
The code above is included in a Meteor package I'm developing. The package.js file specifies that the code should run on the server:
api.add_files("upload.js", "server");
So my questions are:
What is the correct way to check the logged in user ID and username?
Can this code be moved to an Iron Router route instead?
It looks like the line
WebApp.connectHandlers.use('/upload/', function(req, res) {
is Express.js (or similar) code. If so, you have broken out of the Meteor framework and are providing your own REST services, which means you also have to provide your own user management and authentication scheme for incoming REST calls, just as you would in any other bare-bones REST application.
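If you do need the logged-in user inside a raw connect handler, one common workaround is to have the client send its login token with the request and resolve it to a user yourself. This is only a sketch: Accounts._hashLoginToken and Accounts._storedLoginToken are undocumented internals, and the X-Auth-Token header name is just a convention chosen here.
WebApp.connectHandlers.use('/upload/', Meteor.bindEnvironment(function (req, res, next) {
  // the client sends the token it got at login, e.g.
  //   X-Auth-Token: Accounts._storedLoginToken()
  var token = req.headers['x-auth-token'];
  var user = token && Meteor.users.findOne({
    'services.resume.loginTokens.hashedToken': Accounts._hashLoginToken(token)
  });
  if (!user) {
    res.writeHead(403, { 'content-type': 'text/plain' });
    return res.end('Not logged in');
  }
  // user._id and user.username are now available to build the per-user directory
  next();
}));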
I'm trying to let the user upload a txt file, then click an "analyze" button and perform some analysis.
I have the app working locally. I'm using FS.Collection with the FileSystem store, but I had several problems deploying to meteor.com. Here is my collection:
FS.debug = true;
Uploads = new FS.Collection('uploads', {
  stores: [new FS.Store.FileSystem('uploads')]
});
and here is how I try to read the uploaded file:
var fs = Npm.require('fs');
var readedFile = fs.readFileSync(process.env.PWD + '/.meteor/local/cfs/files/uploads/' + file.copies.uploads.key, 'utf-8');
The above works locally but not after I deploy to meteor.com; in the debug messages I see something like this: Error: ENOENT, no such file or directory
So I do not know how to read the file once the app is deployed. How would you do it? Or do you think I should deploy the app to Amazon EC2 instead? I'm afraid to deploy to Amazon and run into the same problem...
Short example of using HTTP to download a file that was uploaded via CollectionFS.
var file = Uploads.findOne({ _id: myId }); // or however you find it
HTTP.get(file.url(), function (err, result) {
  // this will be async obviously
  if (err) {
    console.log("Error " + err + " downloading file " + myId);
  } else {
    var content = result.content; // the contents of the file
    // now do something with it
  }
});
Note that you must meteor add http to get access to the http package.
This is probably the package you want:
https://github.com/tomitrescak/meteor-uploads
It has a nice UI too and is much less trouble than FS.Collection.
How do I restrict a folder so that only users who are logged in to my Meteor app can download files?
I looked into multiple ways of doing this, but the main problem is that I can't access the current user (I get null) with:
Meteor.user() or this.userId()
I tried:
__meteor_bootstrap__.app
  .use(connect.query())
  .use(function (req, res, next) {
    Fiber(function () {
      // USER HERE?
    }).run();
  });
or
__meteor_bootstrap__.app.stack.unshift({
  route: "/protected/secret_document.doc", // only users can download this
  handle: function (req, res) {
    Fiber(function () {
      // CHECK USER HERE?
      // IF NOT LOGGED IN:
      res.writeHead(403, { 'Content-Type': 'text/html' });
      var content = '<html><body>403 Forbidden</body></html>';
      res.end(content, 'utf-8');
    }).run();
  }
});
You could try storing the files in MongoDB, which would mean they are hooked into your collection system and queryable on the client and server. Then just publish the relevant data to the client for specific users, or use Meteor.methods to expose the information that way.
Example:
Assuming files are stored in MongoDB, let's first publish them to the client:
Meteor.publish("files", function(folder) {
if (!this.userId) return;
// the userHasAccessToFolder method checks whether
// this user is allowed to see files in this folder
if (userHasAccessToFolder(this.userId, folder))
// if so, return the files for that folder
// (filter the results however you need to)
return Files.find({folder: folder});
});
Then on the client, we autosubscribe to the published channel so that whenever it changes, it gets refreshed:
Meteor.startup(function () {
  Meteor.autosubscribe(function () {
    // send the current folder to the server,
    // which will return the files in the folder
    // only if the current user is allowed to see it
    Meteor.subscribe("files", Session.get("currentFolder"));
  });
});
NB. I haven't tested the above code, so consider it pseudocode, but it should point you in the general direction for solving this problem. The hard part is storing the files in MongoDB!
I'd be more concerned as to why Meteor.user() isn't working.
A few questions:
Are you on Meteor 0.5.0?
Have you added accounts-base to your Meteor project?
Have you used one of Meteor's login systems (accounts-password, accounts-facebook, etc.)? (Optionally, accounts-ui for ease of use?)
Do you still have autopublish on, or have you set up publishing/subscription properly?
Meteor.user() should be the current user, and Meteor.users should be a Meteor collection of all previously logged-in users.