I have an array of data that I want to fill a collection (SomeCollection) with.
If I go through the array like this:
_.each(dataArray, function (d) {
    var retId = SomeCollection.insert(d);
    console.warn(retId);
});
Where dataArray has 720 unique items with unique _id's.
When the loop executes I get all the retIds back and no errors.
If I write SomeCollection.count() after that, I get 720.
If I reload the page after that, SomeCollection.count() gives some number less than 720 (the same number on each reload, but a different number each time the 'filling' script is re-executed); it can be 320, 521, etc.
I do this on the client with an 'admin' user who has the whole SomeCollection published and subscribed.
The collection is clean before this loop; I remove all items from it explicitly.
Why does this happen?
The problem was that I simply reloaded the page too fast. After the loop ended, the insertion process was apparently still in progress.
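For reference, a minimal sketch of one way to know when it is safe to reload: pass a callback to each insert and count server acknowledgements (the pending counter and the log messages are mine, not part of the original code):

var pending = dataArray.length;
_.each(dataArray, function (d) {
    SomeCollection.insert(d, function (err, retId) {
        // This callback runs once the server has accepted (or rejected) the insert.
        if (err) console.error(err);
        if (--pending === 0) console.log('All inserts acknowledged by the server.');
    });
});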
I have an app where every sale is registered as a document in the Tracking collection.
Another collection is Actuals. It stores a document for each user with a totalValue field for the current month.
totalValue is calculated in this way: when a sale is added or modified (the onCreate / onUpdate functions), the value of the totalValue field is read from the document, recalculated (for example incremented by 100 because a new sale with value=100 was added) and then written back to that document. (In fact there are many other fields that are calculated and I'm updating the whole document, but let's keep the example to just totalValue.)
The code looks like this:
exports.updateActuals = functions.region(util_1.getFunctionRegion()).firestore
    .document(`${trackingCollectionName}/{id}`)
    .onUpdate((change) => {
        const service = new TargetService();
        if (<TrackingDto>change.after.data().isDeleted) {
            return service.getAndDeleteFromActuals(change.before.id, <TrackingDto>change.before.data())
                .then((actual: ActualsDto) => admin.firestore().doc(`${actualsCollectionName}/${actual.id}`).set(actual.data))
                .catch(er => error(er));
        } else {
            return service.getAndUpdateActuals(change.before.id, <TrackingDto>change.after.data(), <TrackingDto>change.before.data())
                .then((actual: ActualsDto) => admin.firestore().doc(`${actualsCollectionName}/${actual.id}`).set(actual.data))
                .catch(er => error(er));
        }
    });
If the user works in online mode this onUpdate function works like a charm.
The problem occurs when the user works in offline mode. When he switches back to online mode, all the sales he made while offline are sent to Firebase. So, for example, 10 onUpdate functions may be triggered at the same time by those registered sales, which means that his document in the "Actuals" collection will be edited by 10 onUpdate functions. This causes wrong calculations, as they all share the same value (the totalValue field).
Example of the problem:
...
totalValue = 100
onUpdate_1 takes totalValue=100 and adds 10
totalValue = 110
onUpdate_2 takes totalValue=110 and adds 20
onUpdate_3 takes totalValue=110 and adds 30
totalValue = 140 (instead of the expected 160; onUpdate_2's increment was overwritten)
...
Is there a way to force onUpdate to wait until a specific document is no longer in use, or to delay adding the sales documents to Firebase (but not as threads on the app side)? Is there another option?
If the new value of a field depends on its current value, you should always use a transaction, or an atomic increment operation, to prevent the problem you now have.
In this case I'd use admin.firestore.FieldValue.increment(10), which ensures the database itself will read-increment-write the value and thus prevent the problem you're now seeing.
You can also tell Cloud Functions to never run more than one instance of a function at a time by setting maxInstances in your code.
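As a rough, untested sketch of that increment approach (the delta calculation, the value and userId fields, and the hard-coded collection names are my assumptions; the original code resolves the Actuals document through TargetService instead):

import * as admin from 'firebase-admin';
import * as functions from 'firebase-functions';

export const updateActualsAtomic = functions
    .runWith({ maxInstances: 1 }) // optional: at most one instance at a time
    .firestore.document('tracking/{id}')
    .onUpdate((change) => {
        const before = change.before.data();
        const after = change.after.data();
        const delta = (after.value || 0) - (before.value || 0); // assumed sale value field
        // Firestore applies the read-increment-write atomically on the server,
        // so concurrent executions cannot overwrite each other's totals.
        return admin.firestore()
            .doc(`actuals/${after.userId}`) // assumed link from sale to user
            .update({ totalValue: admin.firestore.FieldValue.increment(delta) });
    });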
Db structure:
--followers
  -followedUser1
    -user1
  -followedUser2
    -user1
    -user2
--users
  -user1
    -followed
      -followedUser1
      -followedUser2
  -user2 (followedUser1)
    -followed
      -followedUser2
  -user3 (followedUser2)
Every time a user follows (onCreate) or unfollows (onDelete) under the followers/{followedUser}/{followerUser} path, it triggers a function which increments or decrements a count and assigns or detaches posts from the follower. This works via the fan-out method and there are no problems. Now, the worse part comes when a user deletes their account completely, together with detaching their followers from themselves (because the account would become a ghost). I've set an onDelete trigger to detect whenever that happens; it then iterates through this user's (i.e. user3's) followers, removing him from each follower's "followed" list, plus removing his own account. It looks like this afterwards:
--followers
  -followedUser1
    -user1
  -followedUser2
    -user1
    -user2
--users
  -user1
    -followed
      -followedUser1
  -user2 (followedUser1)
Now, the problematic part: when the promise returns I'd like to also remove the whole followers/followedUser2 path (because it's a ghost now), but there is an onDelete trigger which unfortunately executes for every follower under it. So, is there any way to remove the path one level up without triggering the deletion trigger for each of its children? Any other approach would be great, thanks.
edit: Don't get me wrong, it will work, but if the number of followers under a followedUser is "a lot", the server will die after the 100th... trigger.
At this time, there is no way to filter what deletion events will trigger the function, so you are correct, the function will be triggered once for each user the deleted user was following. I recognize this is one of many use cases where such functionality would be useful, so if you get a chance, please fill out a feature request here.
It looks like your multiple-update problem can be fixed with a multi-location update.
In very quickly hacked-together and untested TypeScript:
import * as admin from 'firebase-admin';
import * as functions from 'firebase-functions';
import * as _ from 'lodash';

export const cleanupFollowers = functions.auth.user().onDelete(async (user) => {
    const uid = user.uid;
    const followersNode = admin.database().ref(`followers/${uid}`);
    const snapshot = await followersNode.once('value');
    const followers = _.keys(snapshot.val());
    // Every follower also has a reverse node for this user. Get the list of keys:
    const reverseNodesToDelete = followers.map(follower => `followers/${follower}/${uid}`);
    // Model this update as a map of deep key -> null to delete all at once
    const cleanup: { [path: string]: null } = {};
    reverseNodesToDelete.forEach(path => cleanup[path] = null);
    // add one more update: deleting the full node for the deleted user.
    cleanup[`followers/${uid}`] = null;
    // do all deletions as one database request:
    return admin.database().ref().update(cleanup);
});
Note that this will still fire your counting function, but that should be fine to run in parallel. It probably makes your app simpler to have each invariant captured separately.
We noticed that Anywhere is grouping and ordering the transactions such that the transactions for the parent [i.e. Work Order] are sent first and then the transactions for the child records [e.g. Specifications].
Scenario:
Step 1. Alter the description on the WO
Step 2. Enter Specification values
Step 3. Change the WO Status to COMP
The resulting transactions are sent as follows
Step1 and Step3 are grouped and sent to Maximo
On success
Step 2 is sent to Maximo
We want the messages to be sent in the same order in which they happened; the reason for this is the validations we have in place in Maximo.
E.g.: we validate that the child table has records [in our case, we check that the specifications are populated] before we Complete a WO.
Due to the re-ordering of the events/transactions we are unable to COMP a WO from the device, as the child transaction never gets to Maximo because the parent transaction fails due to missing child data [a catch-22].
We found the piece of code in the [/MaximoAnywhere/apps/WorkExecution/common/js/platform/model/PushingCoordinatorService.js] file that does this re-ordering, and we commented it out:
//if (!transaction.json[PlatformConstants.TRANSACTION_LOCK_FORUPDATE])
//{
// Logger.trace("[PUSHING] Trying to shrink/merge transactions and lock transactions");
// var self = this;
// var promise = this._shrinkSubTransactions(metadata, transaction);
//
// Logger.trace("[PUSHING] going to perform async operations");
// promise.then(function() {
// self._pushSubTransactions(transaction, deferred);
// });
//}
//else
//{
Logger.trace("[PUSHING] going to perform async operations");
this._pushSubTransactions(transaction, deferred);
//}
Once this was done, we were able to COMP the WO from the device, as the events/transactions are now sent in the same order in which they occurred.
However, we have noticed that this has created another undesirable problem: on an error, the device ends up with two Work Orders, the one with the error and the one it re-fetched from Maximo.
Scenario: We have an active timer running on the WO and we click on the clock. This will bring up the Stop Timer View and we select [Complete Work]
So there are two things that should happen: the timer should be stopped and the status should be changed.
Due to some validation error from Maximo, this transaction fails. The result is that we end up with the same work order twice: one with the new status and the error message, and one that was re-fetched from Maximo.
Once we go into the record with the error and undo the change, we end up with two identical WOs on the device.
Apart from the above issue, there needs to be a way to clear the local data from the device without having to delete the app
You could try putting some Model.save()'s in, or in the app.xml you can force a save when you show/hide a view.
Without the saves I think that everything gets put into one change ... sent as one message ... and you lose control over how it gets unpicked.
Digging this one up from the grave, but you can create something called a "priority transaction" that will capture all changes, package them in an isolated request and send it back to the server.
westarAssignmentStatusChange: function (workOrder) {
    workOrder.openPriorityChangeTransaction();
    workOrder.set(ATTRIBUTE, VALUE);
    workOrder.closePriorityChangeTransaction();
};
This will send an update to the server to change the ATTRIBUTE of the WORKORDER to VALUE.
We used this to isolate changes of specific items and made sure they processed in the appropriate order.
I've got a simple CSV file with 40,000 rows which I'm processing browser-side with papa-parse.
I'm trying to insert them one-by-one into a collection using the techniques in Discover Meteor and other 101 posts I find when Googling.
All 40,000 insert browser-side pretty quickly, but when I check Mongo server-side it only has 387 records.
Eventually (usually after 20 seconds or so) it starts to insert server-side.
But if I close or interrupt the browser, the already-inserted records obviously disappear.
How do I force inserts to go server-side, or at least monitor so I know when to notify the user of success?
I tried Tracker.flush(); it made no difference.
I'd go with server-side inserts in a Meteor.method, but all the server-side CSV libraries are more complex to operate than the client-side ones (I'm a beginner to pretty much everything programming :)
Thanks!
This is the main part of my code (inside client folder):
Template.hello.events({
    "submit form": function (event) {
        event.preventDefault();
        var reader = new FileReader();
        reader.onload = function (event) {
            var csv = Papa.parse(this.result, {header: true});
            var count = 0;
            _.forEach(csv.data, function (csvPerson) {
                count++;
                Person.insert(csvPerson);
                console.log('Inserting: ' + count + ' -> ' + csvPerson.FirstName);
            });
        };
        reader.readAsText(event.target[0].files[0]);
    }
});
The last few lines of console output:
Inserting: 39997 -> Joan
Inserting: 39998 -> Sydnee
Inserting: 39999 -> Yael
Inserting: 40000 -> Kirk
The last few lines of CSV (random generated data):
Jescie,Ayala,27/10/82,"P.O. Box 289, 5336 Tristique Road",Dandenong,7903,VI,mus.Proin#gravida.co.uk
Joan,Petersen,01/09/61,299-1763 Aliquam Rd.,Sydney,1637,NS,sollicitudin#Donectempor.ca
Sydnee,Oliver,30/07/13,Ap #648-5619 Aliquam Av.,Albury,1084,NS,Nam#rutrumlorem.ca
Yael,Barton,30/12/66,521 Auctor. Rd.,South Perth,2343,WA,non.cursus.non#etcommodo.co.uk
Kirk,Camacho,25/09/08,"Ap #454-7701 A, Road",Stirling,3121,WA,dictum.eu#morbitristiquesenectus.com
The hello template is a simple form obviously, just file select and submit.
Client code is under client directory.
Person defined in a file in application root.
CSV parsed as strings for now, to avoid complexity.
The records inserted look fine, retrieve by name, whatever.
Person.find().count() browser-side in console results in 40000.
Happy to send the file, which is only 1.5MB and it's random data - not sensitive.
I think Meteor.call() should work here, as follows.
On the client side:
Meteor.call("insertMethod", csvPerson);
And the method on the server side:
Meteor.methods({
    insertMethod: function (csvPerson) {
        Person.insert(csvPerson);
    }
});
In Meteor, in some scenarios, if you don't pass a callback the operation will be synchronous.
If you run Person.insert(csvPerson); on the server, the operation will be sync, not async. Depending on what you want to do, you might have serious problems in the future. On the client it won't be sync, but async.
Since Node.js is an event-based server, a single sync operation can halt the entire system, so you have to be really careful about your sync operations.
For importing data, the best option is to do it server-side inside Meteor.startup(function(){ /* import code goes here */ }).
The solution proposed by Sindis works, but it's slow, and if the browser closes (for some reason) you're not keeping track of the already-inserted records. If you use Meteor.call("insertMethod", csvPerson);, this operation will be sync on the client.
The best option in your beginner scenario (not optimal) is to:
1. While you still have records to insert:
2. Call Meteor.call without a callback.
3. Count all the inserted documents in the collection.
4. Save this value to localStorage.
5. Go back to step 1.
This works assuming that the order of insertion is the same on every insert attempt. If your browser fails, you can always get the value from localStorage and skip that number of records. A rough sketch of this loop follows below.
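Here is a minimal, untested version of that loop on the client, assuming the insertMethod shown earlier; the importPeople helper and the 'csvImportIndex' localStorage key are made-up names, and saving the loop index stands in for steps 3-4:

function importPeople(csvData) {
    // Resume from wherever a previous attempt stopped.
    var start = Number(localStorage.getItem('csvImportIndex') || 0);
    for (var i = start; i < csvData.length; i++) {
        Meteor.call('insertMethod', csvData[i]);               // step 2: no callback
        localStorage.setItem('csvImportIndex', String(i + 1)); // remember progress
    }
    localStorage.removeItem('csvImportIndex');                 // done: clear the marker
}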
I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again.
However, the strangest thing occurs when it's placed under load with multiple sessions hitting the site with page views. Instead of firing one post, it randomly does 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag until a few microseconds later.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule.
I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like
$ts = get_time_now(); // or whatever the function is
$sid = session_id();
INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid"
WHERE NOT EXISTS (SELECT 1 FROM table WHERE postcategory = "$category"
                  AND timestamp > $ts - 24 hours)
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check whether the current session_id() and timestamp are yours. If they are, drip:
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query with get_posts() may not see the new record yet, and may actually trigger your piece of code to create a new post when one has already been placed.
The fix, which appears to force WordPress to flush the write cache, is this:
try {
    $asPosts = array();
    $asPosts = wp_get_recent_posts(1);
    foreach ($asPosts as $asPost) { break; }
    delete_post_meta($asPost['ID'], '_thwart');
    add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}

$asPosts = array();
$asPosts = wp_get_recent_posts(1);
foreach ($asPosts as $asPost) { break; }
$sLastPostDate = '';
$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' '));
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' '));

if ($sLastPostDate != $sNow) {
    // No post today, so go ahead and post your new blog post.
    // Place that code here.
}
The first thing we do is get the most recent post, though we don't really care whether it's actually the most recent one or not. All we're getting it for is a single post ID, so that we can add a hidden custom field (hence the leading underscore) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU-heavy.
Once that is in place, we use wp_get_recent_posts(1) again to see whether the most recent post was made today. If it wasn't, then we are clear to drip some content in. (Or, if you only want to drip in every 72 hours, etc., you can adjust this a little here.)