I'm using collectionFS and in particular the S3 storage adapter.
Most files get uploaded perfectly fine and I can download them afterwards with no problem.
Occasionally I get a file that appears to upload fine (Files.insert(blob, callback) returns to the callback with no error), but later, when I go to download it, fsFile.url() returns null.
https://github.com/CollectionFS/Meteor-CollectionFS#after-the-upload says:
If any storage adapters fail to save any of the copies in the designated store, the server will periodically retry saving them. After a configurable number of failed attempts at saving, the server will give up.
But there is no callback that I'm aware of for such a failure. Furthermore, when looking at fsFile.uploadProgress() I get 100%.
The basic problem is thus that during upload everything looks fine and my app only detects a problem when trying to download the file.
Is there a way to detect an upload failure in the storage adapter?
What else would fsFile.url() returning null be symptomatic of?
Here's an example of one of these broken fsFile objects in mongodb:
{
    "_id" : "uqAYajqCv68HmEJhu",
    "original" : {
        "updatedAt" : ISODate("2015-03-16T23:04:37.200Z"),
        "size" : 699072,
        "type" : ""
    },
    "chunkSize" : 2097152,
    "chunkCount" : 0,
    "chunkSum" : 1
}
In the new version of FSCollection, three events were integrated on the server side: stored, uploaded, and error.
Images.on('error', function (fileObj) {
    console.log("The file with _id " + fileObj._id + " just got an error when uploading");
});
There is no documentation about this yet, since raix and aldeed cleaned up the README.
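For completeness, the other two events can presumably be hooked the same way. A minimal sketch, assuming stored and uploaded take the same kind of handler as error above (the exact callback arguments are not documented):

Images.on('stored', function (fileObj, storeName) {
    // Assumed signature: fired once a copy has been saved to the named store.
    console.log("File " + fileObj._id + " stored in " + storeName);
});

Images.on('uploaded', function (fileObj) {
    // Assumed signature: fired once the client has finished sending all chunks.
    console.log("File " + fileObj._id + " finished uploading");
});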
I am trying to reproduce in R the Microsoft Emotion API program located here. I obtained my API key from Microsoft's website and plugged it into the following code (also from the page linked above):
library(httr)
apiUrl <- 'https://api.projectoxford.ai/emotion/v1.0/recognizeInVideo?outputStyle=perFrame'
key <- '2a0a91c98b57415a....'
urlVideo <- 'https://drive.google.com/file/d/0B5EuiMJDiDhWdWZSUGxvV3lJM0U/view?usp=sharing'
mybody <- list(url = urlVideo)
faceEMO <- httr::POST(
  url = apiUrl,
  httr::content_type('application/json'),
  httr::add_headers(.headers = c('Ocp-Apim-Subscription-Key' = key)),
  body = mybody,
  encode = 'json'
)
operationLocation <- httr::headers(faceEMO)[["operation-location"]]
The request appears to go through successfully as the "faceEMO" object returns a Response 202 code which according to Microsoft's website means:
The service has accepted the request and will start the process later.
In the response, there is a "Operation-Location" header. Client side should further query the operation status from the URL specified in this header.
However, when I open the given URL from the 'operationLocation' object, it says:
{ "error": { "code": "Unauthorized", "message": "Access denied due to
invalid subscription key. Make sure you are subscribed to an API you
are trying to call and provide the right key." } }
This seems to indicate that my request didn't go through after all.
I'm not sure why it would say my key is invalid. I tried regenerating the key and running it again, but received the same result. Maybe it has something to do with the fact that Microsoft gives me two keys? Do I need to use both?
To add additional info, I also tried running the next few lines of code given from the linked web site:
while (TRUE) {
  ret <- httr::GET(operationLocation,
                   httr::add_headers(.headers = c('Ocp-Apim-Subscription-Key' = key)))
  con <- httr::content(ret)
  if (is.null(con$status)) {
    warning("Connection Error, retry after 1 minute")
    Sys.sleep(60)
  } else if (con$status == "Running" | con$status == "Uploading") {
    cat(paste0("status ", con$status, "\n"))
    cat(paste0("progress ", con$progress, "\n"))
    Sys.sleep(60)
  } else {
    cat(paste0("status ", con$status, "\n"))
    break()
  }
}
When I run this I'm shown a message indicating "status Running" and "progress 0" for about 30 seconds. Then it shows "status Failed" and stops running without giving any indication of what caused the failure.
It seems the API requires a direct download link, instead of what you supplied. Normally, you could simply change the url to...
urlVideo <- "https://drive.google.com/uc?export=download&id=0B5EuiMJDiDhWdWZSUGxvV3lJM0U"
I've confirmed that the Emotion API works on small test videos I've uploaded to Google Drive. Unfortunately, the issue now is that your specific video file is large enough that you run into the annoying "Google Drive can't scan this file for viruses" interruption. As far as I can tell, no workaround exists to bypass this. More on this here.
Your best bet is to host the video somewhere other than Google Drive, and make sure the body for the post call contains the direct download link.
I have a mobile app written using Apache Cordova. I am using Azure Mobile Apps to store some data.
I created Easy Tables and one Easy API. The purpose of the API is to perform a delete / update on more than one record. Below is the implementation of the API:
exports.post = function (request, response) {
    var mssql = request.service.mssql;
    var sql = "delete from cust where deptno in ( ? )";
    mssql.query(sql, [request.parameters], {
        success: function (result) { response.send(statusCodes.OK, result); },
        error: function (err) { response.send(statusCodes.BAD_REQUEST, { message: err }); }
    });
}
Is there any other way to implement it? The del() method on the table object only takes an id to delete, and I didn't find any other approach to delete multiple rows in the table.
I am having difficulty testing the implementation, as changes to the API code take 2-3 hours on average to get deployed. I change the code through the Azure website, and when I run it, the old code is hit instead of the latest changes.
Is there any limitation based on the plans we choose?
Update
The updated code worked.
var sql = "delete from trollsconfig where id in (" + request.body.id + ")";
mssql.query(sql, [request.parameters],{
success : function(result){ response.send(statusCodes.OK, result); },
error: function(err) { response.send(statusCodes.BAD_REQUEST, { message: err}); }
});
Let me cover the last one first. You can always restart your service to use the latest code. The code is probably there but the Easy API change is not noticing it. Once your site "times out" and goes to sleep, the code gets reloaded as normal. Logging onto the Azure Portal, selecting your site and clicking Restart should solve the problem.
As to the first problem - there are a variety of ways to implement deletion, but you've pretty much got a good implementation there. I've not run it to test it, but it seems reasonable. What don't you like about it?
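One variation worth considering: instead of concatenating request.body.id into the SQL string (which is open to SQL injection), you can build a placeholder list and pass the ids as query parameters. This is only a sketch; it assumes request.body.id is an array and that mssql.query maps each ? to the corresponding element of the parameter array:

exports.post = function (request, response) {
    var mssql = request.service.mssql;
    var ids = request.body.id; // assumed to be an array of ids
    // One '?' placeholder per id keeps the values out of the SQL text.
    var placeholders = ids.map(function () { return '?'; }).join(',');
    var sql = "delete from trollsconfig where id in (" + placeholders + ")";
    mssql.query(sql, ids, {
        success: function (result) { response.send(statusCodes.OK, result); },
        error: function (err) { response.send(statusCodes.BAD_REQUEST, { message: err }); }
    });
};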
I have a website that uses Meteor 0.9. I have deployed this website on OpenShift (http://www.truthpecker.com).
The problem I'm experiencing is that when I go to a path on my site (/discover), then sometimes (though not always), the data needed are not fetched by Meteor. Instead I get the following errors:
On the client side:
WebSocket connection to 'ws://www.truthpecker.com/sockjs/796/3tfowlag/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
And on the server side:
Exception from sub rD8cj6FGa6bpTDivh Error: Match error: Failed Match.OneOf or Match.Optional validation
at checkSubtree (packages/check/match.js:222)
at check (packages/check/match.js:21)
at _.extend._getFindOptions (packages/mongo-livedata/collection.js:216)
at _.extend.find (packages/mongo-livedata/collection.js:236)
at Meteor.publish.Activities.find.user [as _handler] (app/server/publications.js:41:19)
at maybeAuditArgumentChecks (packages/livedata/livedata_server.js:1492)
at _.extend._runHandler (packages/livedata/livedata_server.js:914)
at _.extend._startSubscription (packages/livedata/livedata_server.js:764)
at _.extend.protocol_handlers.sub (packages/livedata/livedata_server.js:577)
at packages/livedata/livedata_server.js:541
Sanitized and reported to the client as: Match failed [400]
Can anyone help me to eliminate this error and get the site working? I'd be very grateful!
Tony
P.S.: I never got this error using localhost.
EDIT:
The line causing the problem is this one (line 41):
return Activities.find({user: id}, {sort: {timeStamp: -1}, limit:40});
One document in the activities collection looks like this:
{
    "user" : "ZJrgYm34rR92zg6z7",
    "type" : "editArg",
    "debId" : "wtziFDS4bB3CCkNLo",
    "argId" : "YAnjh2Pu6QESzHQLH",
    "timeStamp" : ISODate("2014-09-12T22:10:29.586Z"),
    "_id" : "sEDDreehonp67haDg"
}
When I run the query from line 41 in the mongo shell, I get the following error:
error: { "$err" : "Unsupported projection option: timeStamp", "code" : 13097 }
I don't really understand why this is, though. Can you help me there as well? Thank you.
Make sure that you are passing an integer to skip and limit. Use parseInt() if need be.
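For example, if the limit arrives from the client as a string (limitParam here is just an illustrative name):

// Coerce the incoming value to an integer before handing it to the query.
var limit = parseInt(limitParam, 10);
return Activities.find({user: id}, {sort: {timeStamp: -1}, limit: limit});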
You have a document on your website that does not match your check validation.
The validation you have is in app/server/publications.js:41
So the attribute in question is validated with something like Match.Optional(Match.OneOf(...)), but the document's attribute matches none of the values in Match.OneOf.
You would have to go through the documents of the collection causing this and remove or correct the offending attribute so that it matches your check statement; a query to find such documents is sketched below.
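For example, to find documents whose user field is not a string, something like this in the mongo shell should work (a sketch; adjust the field and expected type to match your own check):

// BSON type 2 is "string"; this finds documents where user has some other type.
db.activities.find({ user: { $not: { $type: 2 } } })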
Update for your updated question.
You're running Meteor-style queries in the meteor mongo/mongo shell. The error you get there is unrelated to the problem in Meteor: to sort in the mongo shell you would do db.activities.find(...).sort(...) instead of Activities.find(..., { sort: {...} }). So that shell error is a red herring.
The issue is most likely that your id is not actually a string. It's supposed to be sEDDreehonp67haDg for the document you're looking for. You might want to use the debugger to see what it actually is.
I don't think you can use limit in client-side find queries; removing limit from my query solves the problem. If you're looking for pagination, you can roll your own by passing a parameter to your Activities publication so that the limit is added to the server-side query along with an offset (a sketch follows below). There is also this pagination package.
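A minimal sketch of that server-side approach (the publication and parameter names are illustrative; check comes from Meteor's standard check package):

// Server: the client asks for a page size instead of using limit on the client.
Meteor.publish('activities', function (userId, limit) {
    check(userId, String);
    check(limit, Number); // also guards against non-integer limits
    return Activities.find({user: userId}, {sort: {timeStamp: -1}, limit: limit});
});

// Client: subscribe with the desired page size.
Meteor.subscribe('activities', userId, 40);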
I am using QtQuick/QML/Qt5.2.1 on Android. I also tested this issue on the Desktop rather than Android and I see the same problem.
I use LocalStorage to persist application data after the application closes.
I open a database using openDatabaseSync:
var db = LocalStorage.openDatabaseSync(
    "TestDB",
    "1.0",           // version
    "Test Database",
    1000000,
    function(db) {
        createSchema(db);
        populateData(db);
    }
);
If the database does not exist, it is created and the callback function gets executed; in that case I create the database schema and populate the initial dataset.
The next time the application starts, obviously I want to keep the database as-is and not recreate it.
The problem is when I restart the application I get this error:
Error: SQL: database version mismatch
If I inspect the .ini file that was created the first time the application was run, I see this:
[General]
Description=Test Database
Driver=QSQLITE
EstimatedSize=1000000
Name=TestDB
Version=
You can clearly see the problem here: the "Version" attribute is empty.
When the application starts up, it compares the requested version "1.0" against this empty string "" and fails.
I can of course fake it to get it to work by specifying the version as "", or by fixing the .ini file - that at least tells me the code is otherwise correct - but clearly that's not a solution.
So, did I miss something or is this a Qt bug?
You can set the database version after creating it:
var db = LocalStorage.openDatabaseSync(
    "TestDB",
    "1.0",
    "Test Database",
    1000000,
    function(db) {
        createSchema(db);
        populateData(db);
        db.changeVersion("", "1.0");
    }
);
Since the callback function will only be called if the database doesn't exist, and the changeVersion function will only work if the current version is "" (otherwise an exception is thrown), I believe it's safe to use it.
EDIT: Maybe this is the intended behaviour... from LocalStorage source code, line 700:
if (dbcreationCallback)
version = QString();
So maybe you really do need to set the db version after you create your tables... before you do that in the callback, it's just an empty database and shouldn't really have a version.
Set the attributes like this in your code:
// db = LocalStorage.openDatabaseSync(identifier, version, description, estimated_size, callback(db))
LocalStorage.openDatabaseSync("kMusicplay", "0.1", "kMusicPlay app Ubuntu", 10000);
where 'kMusicplay' is the app name, '0.1' is the version, 'kMusicPlay app Ubuntu' is the app description, and '10000' is the estimated size of the database.
I am using Meteor 4.2 (Windows) and I am always getting the "update failed: 403 -- Access denied. Can't replace document in restricted collection" when I am trying to update an object in my collection. Strangely I had no problem inserting new ones, only updates are failing.
I tried to "allow" everything on my collection:
Maps.allow({
insert: function () { return true; },
update: function () { return true; },
remove: function () { return true; },
fetch: function () { return true; }
});
But still, this update fails:
Maps.update({
_id: Session.get('current_map')
}, {
name: $('#newMapName').val()
});
Is there something else I can check? Or maybe my code is wrong? Last time I played with my project was with a previous version of Meteor (< 4.0).
Thanks for your help.
PS: Just for information, when I do this update the local collection is updated and I can see the changes in the UI. Then very quickly it is reverted, along with the error message, as the changes have been rejected by the server side.
Alright, the syntax was actually incorrect. I don't really understand why, as it was working well before, but anyway, here is the code that works fine:
Maps.update(Session.get('current_map'), {
    $set: {
        name: $('#newMapName').val()
    }
});
It seems like it must be related to what you're storing in the 'current_map' session variable. If it's a db object, then it probably looks like {_id: <mongo id here>}, which would make the update finder work properly.
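If that is what happened, storing only the id string in the session sidesteps the ambiguity. A sketch (map here stands for whatever document you selected):

// Store just the _id, not the whole document:
Session.set('current_map', map._id);

// The id string can then be used directly as the update selector:
Maps.update(Session.get('current_map'), {$set: {name: $('#newMapName').val()}});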
I ran into the same issue and found the following to work:
Blocks.update({_id: block_id}, {$set: params});
where params is a hash of all the bits I'd like to update and block_id is the mongo object id of the Block I'm trying to update.
Your note about the client-side update (which flashes the update and then reverts) is expected behavior. If you check out their docs under the Data and Security section:
Meteor has a cute trick, though. When a client issues a write to the server, it also updates its local cache immediately, without waiting for the server's response. This means the screen will redraw right away. If the server accepted the update — what ought to happen most of the time in a properly behaving client — then the client got a jump on the change and didn't have to wait for the round trip to update its own screen. If the server rejects the change, Meteor patches up the client's cache with the server's result.