Connection timeout. No DDP heartbeat received - meteor

I'm trying to upload over 5,000 comments from a CSV and then insert them into a collection.
I get the following:
all done dfae22fc33f08cde515ac7452729cf4921d63ebe.js:24
insert failed: MongoError: E11000 duplicate key error index: ag5Uriwu.comments.$_id_ dup key: { : "SuvPB3frrkLs8nErv" } dfae22fc33f08cde515ac7452729cf4921d63ebe.js:1
Connection timeout. No DDP heartbeat received.
The script at hand:
'click .importComments': function (e) {
  var $self = $(e.target);
  $self.text("Importing...");

  $("#commentsCSV").parse({
    worker: true,
    config: {
      step: function (row) {
        var data = row.data;
        for (var key in data) {
          var obj = data[key];
          var post = Posts.findOne({ legacyId: obj[1] });
          var comment = {
            // attributes here
          };
          Comments.insert(comment);
          Posts.update(comment.postId, {
            $inc: { commentsCount: 1 }
          });
        }
        $self.text("Import Comments");
      },
      complete: function (results, file) {
        console.log("all done");
      }
    }
  });
}
How can I make this work without blowing up with the connection timeout errors?
Locally it seems to work decently but on production (modulus.io) it ends pretty abruptly.

I think the problem here is not to do with DDP but with MongoDB; the DDP connection is timing out as a consequence of the MongoDB error.
You're getting a duplicate key error on the _id field. The _id field is automatically indexed by MongoDB, and that index is unique, so the same value cannot appear twice in the same collection.
The CSV you're uploading likely carries its own _id values, which means Mongo is not generating fresh IDs of its own (which are guaranteed to be unique), so any repeated value in the CSV collides with a document that was already inserted.
So I'd recommend removing the _id field from the CSV if it exists.
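For example, you could strip any _id column from each parsed row just before the insert, so a fresh, unique ID is generated. A minimal sketch, assuming the step callback from the script above (the comment attributes are built from the row as before):

step: function (row) {
  var data = row.data;
  for (var key in data) {
    var obj = data[key];
    var comment = {
      // attributes here, built from obj
    };
    delete comment._id; // never carry the CSV's _id into the insert
    Comments.insert(comment);
  }
}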
You can also try using the following package: http://atmospherejs.com/package/csv-to-collection

Am I doing Firestore Transactions correct?

I've followed the Firestore documentation on transactions, and I think I have it all sorted correctly, but in testing I'm noticing that my documents sometimes don't get updated properly. It's possible for multiple versions of the document to be submitted to the function within a very short interval, but I'm only ever interested in keeping the most recent version.
My general logic is this:
1. A new/updated document is sent to the cloud function.
2. Check if the document already exists in Firestore, and if not, add it.
3. If it does exist, check that it is "newer" than the instance in Firestore; if it is, update it.
4. Otherwise, don't do anything.
Here is the code from my function that attempts to accomplish this. I would love some feedback on whether this is the correct/best way to do it:
const ocsFlight = req.body;
const procFlight = processOcsFlightEvent(ocsFlight);

try {
  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);
  const originalFlight = await ocsFlightRef.get();

  if (!originalFlight.exists) {
    const response = await ocsFlightRef.set(procFlight);
    console.log("Record Added: ", JSON.stringify(procFlight));
    res.status(201).json(response); // 201 - Created
    return;
  }

  await db.runTransaction(async (t) => {
    const doc = await t.get(ocsFlightRef);
    const flightDoc = doc.data();
    if (flightDoc.recordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      console.log("Record Updated: ", JSON.stringify(procFlight));
      res.status(200).json("Record Updated");
      return;
    }
    console.log("Record isn't newer, nothing changed.");
    console.log("Same Flight:", JSON.stringify(procFlight));
    res.status(200).json("Record isn't newer, nothing done.");
    return;
  });
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500).json(error.message);
}
The Bugs
First, you are trusting the value of req.body to be of the correct shape. If you don't already have type assertions in processOcsFlightEvent that mirror your security rules for /collection/someFlightId, you should add them. This is important because any database operations performed with the Admin SDK bypass your security rules.
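A minimal sketch of such an assertion, assuming hypothetical fltId and recordModified fields (mirror whatever your real security rules actually enforce):

function processOcsFlightEvent(body) {
  // hypothetical shape check; adapt to your real schema
  if (typeof body !== "object" || body === null)
    throw new Error("request body must be an object");
  if (typeof body.fltId !== "string" || body.fltId.length === 0)
    throw new Error("'fltId' must be a non-empty string");
  if (typeof body.recordModified !== "number")
    throw new Error("'recordModified' must be a number");
  // ...continue with the existing processing...
  return body;
}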
The next bug is sending a response from inside the transaction. Once you send a response back to the client, your function is marked inactive: resources are severely throttled, and any outstanding network requests may not complete or may crash. Because a transaction may be retried a handful of times if a database collision is detected, you should make sure to respond to the client only once the transaction has properly completed.
You use set to write the new flight to Firestore. This can lead to trouble when working with transactions, because a set operation will cancel all pending transactions at that location. If two function instances are fighting over the same flight ID, this can end with the wrong data being written to the database.
In your current code, you return the result of the ocsFlightRef.set() operation to the client as the body of the HTTP 201 Created response. Since the result of DocumentReference#set() is a WriteResult object, you would need to serialize it properly to return it, and even then I don't think it would be useful, as you don't use it for the other response types. An HTTP 201 Created response normally includes where the resource was written as the Location header with no body, but here we'll pass the path in the body instead. If you start using multiple database instances, including the relevant database may also be useful.
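If you'd rather follow the header convention, a sketch of that variant (assuming the Express-style res used throughout):

res.status(201) // 201 - Created
  .set("Location", "/" + ocsFlightRef.path)
  .end();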
Fixing
The correct way to achieve the desired result is to do the entire read->check->write process inside a transaction, and to respond to the client only once the transaction has completed.
So that we can send the appropriate response to the client, we can use the return value of the transaction to pass data out of it. We'll pass out the type of the change we made ("created" | "updated" | "aborted") and the recordModified value of what was stored in the database, and return these to the client along with the resource's path and an appropriate message.
In the case of an error, we'll return a message to show the user as message and the error's Firebase error code (if available) or general message as the error property.
// if not using express to wrangle requests, assert the correct method
if (req.method !== "POST") {
  console.log(`Denied ${req.method} request`);
  res.status(405) // 405 - Method Not Allowed
    .set("Allow", "POST")
    .end();
  return;
}

const ocsFlight = req.body;

try {
  // process AND type check `ocsFlight`
  const procFlight = processOcsFlightEvent(ocsFlight);

  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);

  const { changeType, recordModified } = await db.runTransaction(async (t) => {
    const flightDoc = await t.get(ocsFlightRef);

    if (!flightDoc.exists) {
      t.set(ocsFlightRef, procFlight);
      return {
        changeType: "created",
        recordModified: procFlight.recordModified
      };
    }

    // only parse the field we need rather than everything
    const storedRecordModified = flightDoc.get('recordModified');

    if (storedRecordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      return {
        changeType: "updated",
        recordModified: procFlight.recordModified
      };
    }

    return {
      changeType: "aborted",
      recordModified: storedRecordModified
    };
  });

  switch (changeType) {
    case "updated":
      console.log("Record updated: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Updated",
        recordModified,
        changeType
      });
      return;
    case "created":
      console.log("Record added: ", JSON.stringify(procFlight));
      res.status(201).json({ // 201 - Created
        path: ocsFlightRef.path,
        message: "Created",
        recordModified,
        changeType
      });
      return;
    case "aborted":
      console.log("Outdated record discarded: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Record isn't newer, nothing done.",
        recordModified,
        changeType
      });
      return;
    default:
      throw new Error("Unexpected value for 'changeType': " + changeType);
  }
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500) // 500 - Internal Server Error
    .json({
      message: "Something went wrong",
      // if available, prefer a Firebase error code
      error: error.code || error.message
    });
}
References
Cloud Firestore Transactions
Cloud Firestore Node SDK Reference
HTTP Event Cloud Functions

Ionic2 - SQLite executing SQL

I'm creating an app using SQLite, and I have some problems with it.
I have a page with a variable for managing the database.
public sqlite_object: any;
In the constructor I'm opening the database (creating it if it doesn't exist) and saving the db object to that variable.
constructor(...) {
  let name = 'db_name';
  this.openDatabase(name);
}

openDatabase(name) {
  let db = new SQLite();
  db.create({ name: name, location: 'default' })
    .then((db_obj: SQLiteObject) => {
      this.sqlite_object = db_obj;
    }, (error) => {
      alert('error');
    });
}
So, in the constructor I'm opening the db and saving it for later.
One of my buttons calls this function:
testSQL(sql_queries) {
  this.sqlite_object.transaction(function (tx) {
    Object.keys(sql_queries)
      .sort()
      .forEach(function (v, i) {
        tx.executeSql(sql_queries[v],
          null,
          function (transaction, result) {
            alert('executing sql');
          },
          function (transaction, error) {
            alert('error');
          });
      });
  }, function (error) {
    alert('error2');
  }, function () {
    alert('success');
  });
}
My sql_queries object holds about ~30 queries (some correct, some incorrect).
When I put an alert into the forEach(), it is executed every time (as many times as sql_queries has entries).
When executeSql(...) gets an incorrect query, the error alert shows, but I have never seen the 'executing sql' alert. Is something wrong here? (I don't know whether my queries are executing correctly.)
I also have one more question: how can I get the list of tables from my database?
your "executing sql.." is inside a success callback function. it will be executed after the entire for each() query is successful (not after each loop). So as you said it's showing error. So, It will not execute the successcallback function. else it will execute the errorcallback function. this is beacause your entire query is not successful. I hope you get the point

Get near Users in Meteor using $near query

I have a mobile app that needs to show nearby users, and I want to refresh the nearby-users list every 30 seconds. For this feature I am not using Meteor's real-time sync, since I think it's too heavy; I think it's better to request the list every 30 seconds.
For each user, I have an _id and a mapPosition [lng, lat].
My idea was to perform the $near query on the client side, since the users list should already be in sync with the server. However, I read that geo-queries are not supported client-side by minimongo, so I've created a new method on the server side. (I am still not using the publish/subscribe technique.)
The problem is that I still can't get it working.
Example of User document
var user = {
  _id: "000000",
  userName: "Daniele",
  mapPosition: {
    type: "Point",
    coordinates: [lng, lat] // floats
  }
};
This is the code on my server side
// collections.js
Users = new Mongo.Collection('users');
Users._ensureIndex({'mapPosition.coordinates':'2dsphere'});
// methods.js
nearUsers(data){
check(data,
{
mapPosition: [Number], // [lng, lat]
userId:String // who is asking
});
return Users.find({
mapPosition: { $near: { $geometry: { type: "Point",
coordinates: data.mapPosition
},
$maxDistance: 5 *1609.34 // 5 miles in meters
}
},
'_id' : {$ne: data.userId}
}
).fetch();
}
This is the code on my client side:
var getNearUsers = function () {
  var deferred = $q.defer();
  var mapPosition = [
    parseFloat(GeolocatorService.getAppPosition().lng),
    parseFloat(GeolocatorService.getAppPosition().lat)
  ];

  Meteor.call('nearUsers', {
    userId: me.id,
    mapPosition: mapPosition
  }, function (err, result) {
    if (err) {
      console.error('[getNearUsers] ' + err);
      deferred.reject(err);
    } else {
      console.log('[getNearUsers] ' + JSON.stringify(result.fetch()));
      deferred.resolve(result);
    }
  });

  return deferred.promise;
};

// call it each 30 seconds
setInterval(function () { getNearUsers(); }, 30000);
On the server, I get this error
Exception while invoking method 'nearUsers' MongoError: Unable to execute query: error processing que$
at Object.Future.wait (/home/utente/.meteor/packages/meteor-tool/.1.3.2_2.x9uas0++os.linux.x86_32$
at SynchronousCursor._nextObject (packages/mongo/mongo_driver.js:986:47)
at SynchronousCursor.forEach (packages/mongo/mongo_driver.js:1020:22)
at SynchronousCursor.map (packages/mongo/mongo_driver.js:1030:10)
at SynchronousCursor.fetch (packages/mongo/mongo_driver.js:1054:17)
at Cursor.(anonymous function) [as fetch] (packages/mongo/mongo_driver.js:869:44)
at [object Object].nearUsers (server/methods.js:38:47)
at maybeAuditArgumentChecks (packages/ddp-server/livedata_server.js:1704:12)
at packages/ddp-server/livedata_server.js:711:19
at [object Object]._.extend.withValue (packages/meteor/dynamics_nodejs.js:56:1)
- - - - -
Tree: $and
$not
_id == "570a6aae4bd648880834e621"
lastUpdate $gt 1469447224302.0
GEONEAR field=mapPosition maxdist=8046.7 isNearSphere=0
Sort: {}
Proj: {}
planner returned error: unable to find index for $geoNear query
On the client, I get this error
[Error] Error: [filter:notarray] Expected array but received: {}
http://errors.angularjs.org/1.4.3/filter/notarray?p0=%7B%7D
http://localhost:8100/lib/ionic/js/ionic.bundle.js:13380:32
http://localhost:8100/lib/ionic/js/ionic.bundle.js:31563:31
fn
regularInterceptedExpression#http://localhost:8100/lib/ionic/js/ionic.bundle.js:27539:37
$digest#http://localhost:8100/lib/ionic/js/ionic.bundle.js:28987:43
$apply#http://localhost:8100/lib/ionic/js/ionic.bundle.js:29263:31
tick#http://localhost:8100/lib/ionic/js/ionic.bundle.js:24396:42
(funzione anonima) (ionic.bundle.js:25642)
(funzione anonima) (ionic.bundle.js:22421)
$digest (ionic.bundle.js:29013)
$apply (ionic.bundle.js:29263)
tick (ionic.bundle.js:24396)
I solved it by deleting the folder myAppDir/.meteor/local/db and restarting Meteor, so the local database (and with it the 2dsphere index) was recreated from scratch. (Running meteor reset clears this local database as well.)
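A related precaution, not from the original answer: running the index creation inside Meteor.startup ensures it executes once the server's Mongo connection is ready, so the 2dsphere index exists before any $near query runs (a sketch using the same Users collection):

// server only
Meteor.startup(function () {
  Users._ensureIndex({ 'mapPosition.coordinates': '2dsphere' });
});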

How can I do a replace with gridfs-stream?

I'm using this code to do a file update:
app.post("/UploadFile", function(request, response)
{
var file = request.files.UploadedFile;
var name = request.param("Name");
var componentId = request.param("ComponentId");
console.log("Uploading: " + name);
var parameters =
{
filename: name,
metadata:
{
Type: "Screenshot",
ComponentId: componentId
}
};
grid.files.findOne( { "metadata.ComponentId" : componentId }, function(error, existing)
{
console.log("done finding");
if (error)
{
common.HandleError(error);
}
else
{
if (existing)
{
console.log("Exists: " + existing._id);
grid.remove({ _id: existing._id }, function(removeError)
{
if (removeError)
{
common.HandleError(removeError, response);
}
else
{
SaveFile(file, parameters, response);
}
});
}
else
{
console.log("new");
SaveFile(file, parameters, response);
}
}
});
});
function SaveFile(file, parameters, response)
{
console.log("Saving");
var stream = grid.createWriteStream(parameters);
fs.createReadStream(file.path).pipe(stream);
}
Basically I'm checking for a file that has an ID stored in metadata. If it exists, I delete it before my save, and if not I just do the save. It seems to work only sporadically. I sometimes see two erroneous behaviors:
The file will be deleted, but not recreated.
The file will appear to be updated, but it won't actually be replaced until I call my code again. So basically I need to do two file uploads for it to register the replace.
It's very sketchy, and I can't really determine a pattern for when it's going to work and when it isn't.
So I'm assuming I'm doing something wrong. What's the right way to replace a file using gridfs-stream?
It's difficult to say for sure from just the code you've provided (i.e. you don't show how the response to the app.post is ultimately handled), but I see several red flags to check:
Your SaveFile function above will return immediately after setting up the pipe between your file and the gridFS store. That is to say, the caller of the code you provide above will likely get control back well before the file has been completely copied to the MongoDB instance if you are moving around large files, and/or if your MongoDB store is over a relatively slow link (e.g. the Internet).
In these cases it is very likely that any immediate check by the caller will occur while your pipe is still running, and therefore before the gridFS store contains the correct copy of the file.
The other issue is that you don't do any error checking or handling of the events that may be emitted by the streams you've created.
The fix probably involves creating appropriate event handlers on your pipe, along the lines of:
function SaveFile(file, parameters, response)
{
  console.log("Saving");
  var stream = grid.createWriteStream(parameters);
  var pipe = fs.createReadStream(file.path).pipe(stream);

  pipe.on('error', function (err) {
    console.error('The write of ' + file.path + ' to gridFS FAILED: ' + err);
    // Handle the response to the caller, notifying of the failure
  });

  pipe.on('finish', function () {
    console.log('The write of ' + file.path + ' to gridFS is complete.');
    // Handle the response to the caller, notifying of success
  });
}
The function handling the 'finish' event will not be called until the transfer is complete, so that is the appropriate place to respond to the app.post request. If nothing else, you should get useful information from the error event to help in diagnosing this further.
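For completeness, a sketch of wiring the response into those handlers, assuming the Express-style response object the route above already receives:

pipe.on('error', function (err) {
  console.error('The write of ' + file.path + ' to gridFS FAILED: ' + err);
  response.status(500).send('Upload failed'); // notify the caller of the failure
});

pipe.on('finish', function () {
  console.log('The write of ' + file.path + ' to gridFS is complete.');
  response.status(200).send('Upload complete'); // safe to respond: transfer is done
});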

Meteor Collection is not sent after upgrade to 0.5.9

For some reason, after upgrading to 0.5.9, I'm having an issue where the server side seems to send everything correctly, but the client side says it has received nothing.
//server:
Meteor.publish("orders", function (ordersQueryParams) {
  console.log("orders publish: " + JSON.stringify(ordersQueryParams));
  if (this.userId && ordersQueryParams) {
    console.log("orders collection: " + Orders.find({ "customer._id": this.userId }, ordersQueryParams).count());
    return Orders.find({ "customer._id": this.userId }, ordersQueryParams);
  }
});
//client:
var ordersPreferences = {
  table: {
    size: 10
  },
  query: {
    sort: { createdDate: -1 },
    skip: 0,
    limit: 10
  }
};
Session.set("ordersPreferences", ordersPreferences);

Meteor.autorun(function () {
  var ordersPreferences = Session.get("ordersPreferences");
  console.log('subscribing to orders');
  Meteor.subscribe("orders", ordersPreferences.query);
});
//both:
Orders = new Meteor.Collection("orders");

Deps.autorun(function () {
  if (Meteor.isServer)
    console.log("on server orders count is " + Orders.find().count());
  if (Meteor.isClient)
    console.log("on client orders count is " + Orders.find().count());
});
Server log:
on server orders count is 26
orders publish: {"sort":{"createdDate":-1},"skip":0,"limit":10}
orders collection: 26
Client log:
subscribing to orders
on client orders count is 0
Why does the server say there are 26 docs while the client insists on 0?
It's driving me nuts :(
I found the problem:
I was "waiting" for my Meteor.user() to become available and had this autorun:
Meteor.autorun(function (handle) {
  var ordersPage = new OrdersPage();
  if (Meteor.user()) {
    ordersPage.init();
    ordersPage.autorun();
    handle.stop();
  }
});

if (Meteor.user()) {
  return "orders";
}
Once Meteor.user() was found, this function no longer needed to run, hence the handle.stop().
Apparently, as of 0.5.9, handle.stop() stops not only the immediate autorun but everything underneath it as well (including the subscriptions that feed the collections).
Might be a bug introduced in Meteor... or might be a new feature.
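One hedged workaround, assuming the subscription was being created beneath that autorun: start the subscription in its own top-level computation, so stopping the user-wait computation cannot tear the subscription down with it.

// Top-level computation: never stopped, so its subscription survives.
Deps.autorun(function () {
  var ordersPreferences = Session.get("ordersPreferences");
  if (ordersPreferences)
    Meteor.subscribe("orders", ordersPreferences.query);
});

// Separate computation that only waits for the user; safe to stop.
Meteor.autorun(function (handle) {
  if (Meteor.user()) {
    var ordersPage = new OrdersPage();
    ordersPage.init();
    handle.stop();
  }
});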
