How can I prevent Postgres deadlocks when running Jest tests on CircleCI? - integration-testing

When I run my tests on CircleCI, it logs the following message many times, and eventually the tests fail because none of the database methods can retrieve data due to the deadlocks:
{
  "message": "Error running raw sql query in pool.",
  "stack": "error: deadlock detected\n at Connection.Object.<anonymous>.Connection.parseE (/home/circleci/backend/node_modules/pg/lib/connection.js:567:11)\n at Connection.Object.<anonymous>.Connection.parseMessage (/home/circleci/-backend/node_modules/pg/lib/connection.js:391:17)\n at Socket.<anonymous> (/home/circleci/backend/node_modules/pg/lib/connection.js:129:22)\n at emitOne (events.js:116:13)\n at Socket.emit (events.js:211:7)\n at addChunk (_stream_readable.js:263:12)\n at readableAddChunk (_stream_readable.js:250:11)\n at Socket.Readable.push (_stream_readable.js:208:10)\n at TCP.onread (net.js:597:20)",
  "name": "error",
  "length": 316,
  "severity": "ERROR",
  "code": "40P01",
  "detail": "Process 1000 waits for AccessExclusiveLock on relation 17925 of database 16384; blocked by process 986.\nProcess 986 waits for RowShareLock on relation 17870 of database 16384; blocked by process 1000.",
  "hint": "See server log for query details.",
  "file": "deadlock.c",
  "line": "1140",
  "routine": "DeadLockReport",
  "level": "error",
  "timestamp": "2018-10-15T20:54:29.221Z"
}
This is the test command I run: jest --logHeapUsage --forceExit --runInBand
I also tried this: jest --logHeapUsage --forceExit --maxWorkers=2
Pretty much all of the tests run some sort of database function. This issue only started to occur when we added more tests. Has anyone else had this same issue?

Based on the error message, we got a deadlock because of a RowShareLock.
This means that two transactions (let's call them transactionOne and transactionTwo) have each locked a resource which the other transaction requires.
Example:
transactionOne locks the record in UserTable with userId = 1
transactionTwo locks the record in UserTable with userId = 2
transactionOne attempts to update the record in UserTable with userId = 2, but since it is locked by another transaction, it waits for the lock to be released
transactionTwo attempts to update the record in UserTable with userId = 1, but since it is locked by another transaction, it waits for the lock to be released
Now the database detects that there is a deadlock, picks one of the transactions as the victim, and terminates it.
Let's say it picks transactionOne and terminates it. This results in the exception posted in the question.
transactionTwo is now allowed to perform the update in UserTable for the user with userId = 1.
transactionTwo completes successfully.
Databases are quick to detect deadlocks (Postgres checks blocked processes after deadlock_timeout, 1 second by default), so the exception appears almost immediately.
This is the mechanism behind the deadlocks.
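To make the interleaving concrete, here is a minimal repro sketch with two pg clients. The table and column names are illustrative, and connection settings are assumed to come from the standard PG* environment variables; SELECT ... FOR UPDATE is what takes the RowShareLock mentioned in the error:

// Hypothetical repro of the interleaving described above.
const { Pool } = require('pg');
const pool = new Pool();

async function lockInOrder(firstId, secondId) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // lock "our" row first
    await client.query('SELECT * FROM user_table WHERE user_id = $1 FOR UPDATE', [firstId]);
    // give the other transaction time to lock its row
    await new Promise(resolve => setTimeout(resolve, 100));
    // now try to lock the row the other transaction already holds
    await client.query('SELECT * FROM user_table WHERE user_id = $1 FOR UPDATE', [secondId]);
    await client.query('COMMIT');
  } catch (e) {
    await client.query('ROLLBACK');
    throw e; // the victim fails with code 40P01, "deadlock detected"
  } finally {
    client.release();
  }
}

// each transaction holds the row the other one wants -> deadlock
Promise.allSettled([lockInOrder(1, 2), lockInOrder(2, 1)])
  .then(results => console.log(results.map(r => r.status))); // one 'fulfilled', one 'rejected'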
Deadlocks can have different root causes.
I see you use the pg module. Make sure you are using transactions with it correctly: pg node-postgres transactions
I would suspect a few different root causes and their solutions:
Cause 1: Multiple tests are running against the same database instance
It may be that different CI pipelines are executing the same tests against the same Postgres instance.
Solution:
This is the least probable situation, but the CI pipeline should provision its own separate Postgres instance on each run.
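For example, a Jest globalSetup could create a uniquely named database per run, so parallel pipelines never contend on the same data. This is only a sketch: CIRCLE_BUILD_NUM is a standard CircleCI environment variable, but everything else here (file role, naming scheme, credentials via PG* variables) is an assumption:

// Hypothetical Jest globalSetup: give every CI run its own database.
const { Pool } = require('pg');

module.exports = async () => {
  // connect to the maintenance database (credentials assumed to come
  // from the standard PG* environment variables)
  const admin = new Pool({ database: 'postgres' });
  // CIRCLE_BUILD_NUM is set by CircleCI; fall back to a timestamp locally
  const dbName = `test_${process.env.CIRCLE_BUILD_NUM || Date.now()}`;
  // identifiers cannot be bound as query parameters, so interpolate the
  // (digits-only) name directly
  await admin.query(`CREATE DATABASE ${dbName}`);
  await admin.end();
  // point the application pool at the fresh database
  process.env.PGDATABASE = dbName;
};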
Cause 2: Transactions are not rolled back on error (missing catch with ROLLBACK)
This means that some transactions may stay alive and block others.
Solution: All transactions should have appropriate error handling.
const client = await pool.connect()
try {
  await client.query('BEGIN')
  // do what you have to do
  await client.query('COMMIT')
} catch (e) {
  await client.query('ROLLBACK')
  throw e
} finally {
  client.release()
}
Cause 3: Concurrency. For example: tests are running in parallel and they cause deadlocks.
We are writing scalable apps, which means deadlocks are inevitable. We have to be prepared for them and handle them appropriately.
Solution: Use a "let's try again" strategy. When our code detects a deadlock exception, it simply retries a finite number of times. This approach has proven itself in my production apps for more than a decade.
Solution with a helper function:
// Sample deadlock wrapper
const handleDeadLocks = async (action, currentAttempt = 1, maxAttempts = 3) => {
  try {
    return await action();
  } catch (e) {
    // Postgres reports deadlocks with SQLSTATE 40P01 (see "code" in the error above),
    // which is more deterministic than matching on the stack text
    const isDeadlock = e.code === "40P01" || e.stack?.includes("deadlock detected");
    const nextAttempt = currentAttempt + 1;
    if (isDeadlock && nextAttempt <= maxAttempts) {
      // try again
      return await handleDeadLocks(action, nextAttempt, maxAttempts);
    } else {
      throw e;
    }
  }
};
// our db access functions
const updateUserProfile = async (input) => {
  return handleDeadLocks(async () => {
    // do our db calls
  });
};
If the code becomes too complex or nested, we can try another solution using a higher-order function:
const handleDeadLocksHOF = (funcRef, maxAttempts = 3) => {
  return async (...args) => {
    let currentAttempt = 1;
    while (currentAttempt <= maxAttempts) {
      try {
        return await funcRef(...args);
      } catch (e) {
        const isDeadlock = e.code === "40P01" || e.stack?.includes("deadlock detected");
        if (isDeadlock && currentAttempt < maxAttempts) {
          // try again
          currentAttempt += 1;
        } else {
          throw e;
        }
      }
    }
  };
};
// Instead of exporting updateUserProfile directly, we export the decorated function;
// this way we can control how many retries we want or keep the default.
// old code:
export const updateUserProfile = (input) => {
  // our legacy, already implemented data access code
}
// new code:
const updateUserProfileLegacy = (input) => {
  // our legacy, already implemented data access code
}
export const updateUserProfile = handleDeadLocksHOF(updateUserProfileLegacy)

Related

Firestore Transactions is not handling race condition

Objective
When a user clicks the purchase button on the web frontend, it sends a POST request to the backend to create a purchase order. First, the backend checks the number of available stocks. If available is greater than 0, it reduces available by 1 and then creates the order.
The setup
The backend (NestJS) queries Firestore for the latest available value and reduces available by 1. For debugging, I return the available value.
let available;
try {
  await runTransaction(firestore, async (transaction) => {
    const sfDocRef = doc(collection(firestore, 'items_available'), documentId);
    const sfDoc = await transaction.get(sfDocRef);
    if (!sfDoc.exists()) {
      throw 'Document does not exist!';
    }
    const data = sfDoc.data();
    available = data.available;
    if (available > 0) {
      transaction.update(sfDocRef, {
        available: available - 1,
      });
    }
  });
} catch (e) {
  console.log('Transaction failed: ', e);
}
return { available };
My stress test setup
The goal is to see every API request return a different available value; this would mean that Firestore transactions are reducing the value correctly even though multiple requests come in concurrently.
I wrote a simple multi-threaded program that calls the backend's create-order API and saves the available value returned for each request.
The stress test runs at about 10 transactions per second, as I have 10 concurrent processes querying the backend. Each process makes 20 http.get requests:
const http = require('http');
function call() {
  http.get('http://localhost:5000/get_item_available', res => {
    let data = [];
    res.on('data', chunk => {
      data.push(chunk);
    });
    res.on('end', () => {
      console.log('Response: ', Buffer.concat(data).toString());
    });
  }).on('error', err => {
    console.log('Error: ', err.message);
  });
}
for (var i = 0; i < 20; i++) {
  call();
}
The problem
Unfortunately, the available values I got from the requests contain repeated values; that is, several requests received the same available value instead of each getting a unique one.
What is wrong? Aren't Firestore transactions meant to handle race conditions? Any suggestions on what I could change to handle multiple requests hitting the server and return a new value for each request?
You have a catch clause to handle the case where the transaction fails, but you then still end up returning a value to the caller with return { available }. In that situation you should return an error to the caller instead.
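A minimal sketch of that fix, assuming a NestJS controller (HttpException and HttpStatus are from @nestjs/common; the chosen status code is illustrative, not from the question):

import { HttpException, HttpStatus } from '@nestjs/common';

let available;
try {
  await runTransaction(firestore, async (transaction) => {
    // ... same read/decrement logic as in the question ...
  });
} catch (e) {
  console.log('Transaction failed: ', e);
  // propagate the failure instead of falling through to `return { available }`
  throw new HttpException('Purchase failed', HttpStatus.CONFLICT);
}
return { available };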

Problem with saving data in Transaction in Firebase Firestore for Flutter

I have a problem with transactions in my web application created in Flutter. For the database I use Firebase Firestore, where I save documents via transactions.
Dependency:
cloud_firestore: 3.1.1
StudentGroup is my main document. It has 4 stages and each of them has 3-5 tasks (everything is in one document). I have to store a game timer, so every 10 seconds I make a request to save the time for the current stage (every stage has a different timer). I have a problem with saving a task, because sometimes, when 2 requests are made at the same time, I get some weird state manipulation:
The task is updated and "isFinished" is set to true.
The timer is updated to the correct value, but with this update the previous task update is somehow lost ("isFinished" is set back to false).
This is how I save a task:
Future<Result> saveTask({required String sessionId, required String studentGroupId,
    required Task task}) async {
  print("trying to save task <$task>.");
  try {
    return await _firebaseFirestore.runTransaction((transaction) async {
      final studentGroupRef = _getStudentGroupDocumentReference(
        sessionId: sessionId,
        studentGroupId: studentGroupId
      );
      final sessionGroupDoc = await studentGroupRef.get();
      if (!sessionGroupDoc.exists) {
        return Result.error("student group not exists");
      }
      final sessionGroup = StudentGroup.fromSnapshot(sessionGroupDoc);
      sessionGroup.game.saveTask(task);
      transaction.set(studentGroupRef, sessionGroup.toJson());
    })
        .then((value) => taskFunction(true))
        .catchError((error) => taskFunction(false));
  } catch (error) {
    return Result.error("Error couldn't save task");
  }
}
This is how I save the time:
Future<Result> updateTaskTimer({required String sessionId,
    required String studentGroupId, required Duration duration}) async {
  print("trying to update first task timer");
  try {
    return await _firebaseFirestore.runTransaction((transaction) async {
      final studentGroupRef = _getStudentGroupDocumentReference(
        sessionId: sessionId,
        studentGroupId: studentGroupId
      );
      final sessionGroupDoc = await studentGroupRef.get();
      if (!sessionGroupDoc.exists) {
        return Result.error("student group not exists");
      }
      final sessionGroup = StudentGroup.fromSnapshot(sessionGroupDoc);
      switch (sessionGroup.game.gameStage) {
        case GameStage.First:
          sessionGroup.game.stages.first.duration = duration.inSeconds;
          break;
        case GameStage.Second:
          sessionGroup.game.stages[1].duration = duration.inSeconds;
          break;
        case GameStage.Third:
          sessionGroup.game.stages[2].duration = duration.inSeconds;
          break;
        case GameStage.Fourth:
          sessionGroup.game.stages[3].duration = duration.inSeconds;
          break;
        case GameStage.Fifth:
          sessionGroup.game.stages[4].duration = duration.inSeconds;
          break;
      }
      transaction.set(
        studentGroupRef,
        sessionGroup.toJson(),
        SetOptions(merge: true)
      );
      print("Did I finish task 4? ${sessionGroup.game.stages.first.tasks[3].isFinished}");
    })
        .then((value) => timerFunction(true))
        .catchError((error) => timerFunction(false));
  } catch (error) {
    return Result.error("Error couldn't update task timer");
  }
}
timerFunction and taskFunction print some messages to the console and return Result.error or Result.success (for now they return bool).
I don't know if I am doing something wrong with Firebase Firestore transactions. I would like to have atomic operations for reading and writing data.
Transactions ensure atomicity, which means that if the transaction succeeds, then all of its reads and writes occur in a non-overlapping way with other transactions. This prevents the type of problem you are describing.
But this doesn't work if you spread your reads and writes over multiple transactions. In particular, it looks to me like you are writing a task which was obtained from outside the transaction. Instead, you should use IDs or similar to track which documents you need to update, then do the read and the write inside the transaction.
Alternatively, Firebase also provides batched writes, which let you specify only the specific properties you want to update; any other properties are left unchanged. See the Firestore batched writes documentation for examples, and the sketch below.
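For illustration, a minimal batched-write sketch (JavaScript SDK shown for brevity; cloud_firestore's Dart API is analogous; the document path and field name here are invented, not from the question). Note that dotted field paths address nested map fields, not array elements, so this only works if the data is stored as maps:

// Update only the field that changed, so a timer write can never
// clobber an unrelated `isFinished` flag elsewhere in the document.
async function saveStageDuration(sessionId, studentGroupId, durationInSeconds) {
  const db = firebase.firestore();
  const groupRef = db.doc(`sessions/${sessionId}/studentGroups/${studentGroupId}`);
  const batch = db.batch();
  // update() with a dotted field path rewrites only this nested map field;
  // every other field in the document is left untouched
  batch.update(groupRef, { 'game.currentStageDuration': durationInSeconds });
  await batch.commit();
}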

Am I doing Firestore Transactions correctly?

I've followed the Firestore documentation with relation to transactions, and I think I have it all sorted correctly, but in testing I am noticing issues with my documents not getting updated properly sometimes. It is possible that multiple versions of the document could be submitted to the function in a very short interval, but I am only interested in only ever keeping the most recent version.
My general logic is this:
New/Updated document is sent to cloud function
Check if document already exists in Firestore, and if not, add it.
If it does exist, check that it is "newer" than the instance in Firestore; if it is, update it.
Otherwise, don't do anything.
Here is the code from my function that attempts to accomplish this. I would love some feedback on whether this is the correct/best way to do it:
const ocsFlight = req.body;
const procFlight = processOcsFlightEvent(ocsFlight);
try {
  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);
  const originalFlight = await ocsFlightRef.get();
  if (!originalFlight.exists) {
    const response = await ocsFlightRef.set(procFlight);
    console.log("Record Added: ", JSON.stringify(procFlight));
    res.status(201).json(response); // 201 - Created
    return;
  }
  await db.runTransaction(async (t) => {
    const doc = await t.get(ocsFlightRef);
    const flightDoc = doc.data();
    if (flightDoc.recordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      console.log("Record Updated: ", JSON.stringify(procFlight));
      res.status(200).json("Record Updated");
      return;
    }
    console.log("Record isn't newer, nothing changed.");
    console.log("Record:", JSON.stringify("Same Flight:", JSON.stringify(procFlight)));
    res.status(200).json("Record isn't newer, nothing done.");
    return;
  });
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500).json(error.message);
}
The Bugs
First, you are trusting the value of req.body to be of the correct shape. If you don't already have type assertions that mirror your security rules for /collection/someFlightId in processOcsFlightEvent, you should add them. This is important because any database operations from the Admin SDKs will bypass your security rules.
The next bug is sending a response from your function inside the transaction. Once you send a response back to the client, your function is marked inactive: resources are severely throttled and any network requests may not complete or may crash. As a transaction may be retried a handful of times if a database collision is detected, you should make sure to respond to the client only once the transaction has properly completed.
You use set to write the new flight to Firestore. This can lead to trouble when working with transactions, as a set operation will cancel all pending transactions at that location. If two function instances are fighting over the same flight ID, this leads to the problem where the wrong data can be written to the database.
In your current code, you return the result of the ocsFlightRef.set() operation to the client as the body of the HTTP 201 Created response. As the result of DocumentReference#set() is a WriteResult object, you would need to properly serialize it to return it to the client, and even then it would not be useful, as you don't use it for the other response types. An HTTP 201 Created response normally includes the location of the written resource as the Location header with no body, but here we'll pass the path in the body. If you start using multiple database instances, including the relevant database may also be useful.
Fixing
The correct way to achieve the desired result would be to do the entire read->check->write process inside of a transaction and only once the transaction has completed, then respond to the client.
So that we can send the appropriate response to the client, we use the return value of the transaction to pass data out of it. We'll pass the type of the change we made ("created" | "updated" | "aborted") and the recordModified value of what was stored in the database. We'll return these along with the resource's path and an appropriate message.
In the case of an error, we'll return a message to show the user as message and the error's Firebase error code (if available) or general message as the error property.
// if not using express to wrangle requests, assert the correct method
if (req.method !== "POST") {
  console.log(`Denied ${req.method} request`);
  res.status(405) // 405 - Method Not Allowed
    .set("Allow", "POST")
    .end();
  return;
}

const ocsFlight = req.body;

try {
  // process AND type check `ocsFlight`
  const procFlight = processOcsFlightEvent(ocsFlight);

  const ocsFlightRef = db.collection(collection).doc(procFlight.fltId);

  const { changeType, recordModified } = await db.runTransaction(async (t) => {
    const flightDoc = await t.get(ocsFlightRef);

    if (!flightDoc.exists) {
      t.set(ocsFlightRef, procFlight);
      return {
        changeType: "created",
        recordModified: procFlight.recordModified
      };
    }

    // only parse the field we need rather than everything
    const storedRecordModified = flightDoc.get('recordModified');

    if (storedRecordModified <= procFlight.recordModified) {
      t.update(ocsFlightRef, procFlight);
      return {
        changeType: "updated",
        recordModified: procFlight.recordModified
      };
    }

    return {
      changeType: "aborted",
      recordModified: storedRecordModified
    };
  });

  switch (changeType) {
    case "updated":
      console.log("Record updated: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Updated",
        recordModified,
        changeType
      });
      return;
    case "created":
      console.log("Record added: ", JSON.stringify(procFlight));
      res.status(201).json({ // 201 - Created
        path: ocsFlightRef.path,
        message: "Created",
        recordModified,
        changeType
      });
      return;
    case "aborted":
      console.log("Outdated record discarded: ", JSON.stringify(procFlight));
      res.status(200).json({ // 200 - OK
        path: ocsFlightRef.path,
        message: "Record isn't newer, nothing done.",
        recordModified,
        changeType
      });
      return;
    default:
      throw new Error("Unexpected value for 'changeType': " + changeType);
  }
} catch (error) {
  console.log("Error:", JSON.stringify(error));
  res.status(500) // 500 - Internal Server Error
    .json({
      message: "Something went wrong",
      // if available, prefer a Firebase error code
      error: error.code || error.message
    });
}
References
Cloud Firestore Transactions
Cloud Firestore Node SDK Reference
HTTP Event Cloud Functions

Firestore update on server only

When retrieving data from Firestore, one has the option of forcing retrieval from the server. The default option is cache and server, as determined by Firestore.
I have a usage where a command-and-control node issues real-time commands to remote nodes backed by Firestore. This requires the updates to be done on the server (or fail) so that the C&C node has certainty about the execution (or failure) in real time. What I would like to do is disable use of the cache for these updates. I have not found a way to do that. Is this possible with the current capabilities of Firestore?
Note that it is not desirable to disable Firestore caching at a global level as the cache is beneficial in other situations.
----EDIT-----
Based on the responses, I have created this update method that attempts to force updating the server using a transaction.
A couple of notes:
This is Dart code.
Utils.xyz is an internal library and in this case it is being used to log.
I have reduced the network speed for the test to simulate a bad network connection.
The timeout is set to 5 seconds.
Here is the output of my log:
I/flutter (22601): [2021-06-06 22:35:30] [LogLevel.DEBUG] [FirestoreModel] [update] [We are here!]
I/flutter (22601): [2021-06-06 22:35:47] [LogLevel.DEBUG] [FirestoreModel] [update] [We are here!]
I/flutter (22601): [2021-06-06 22:36:02] [LogLevel.DEBUG] [FirestoreModel] [update] [We are here!]
I/flutter (22601): [2021-06-06 22:37:18] [LogLevel.DEBUG] [FirestoreModel] [update] [We are here!]
I/flutter (22601): [2021-06-06 22:37:20] [LogLevel.INFO] [FirestoreModel] [update] [Transaction successful in 110929ms.]
Firebase completely ignores the timeout of 5 seconds; it tries to update 4 times, each attempt ~15 seconds apart, and finally succeeds after 110 seconds. I am after a real-time response within seconds (5 s) or failure.
Future<void> update(
  Map<String, dynamic> data, {
  WriteBatch batch,
  Transaction transaction,
  bool forceServer = false,
}) async {
  // If updating there must be an id.
  assert(this.id != null);
  // Only one of batch or transaction can be non-null.
  assert(batch == null || transaction == null);
  // When forcing to update on server no transaction or batch is allowed.
  assert(!forceServer || (batch == null && transaction == null));
  try {
    if (forceServer) {
      DateTime start = DateTime.now();
      await FirebaseFirestore.instance.runTransaction(
        (transaction) async {
          await update(data, transaction: transaction);
          Utils.logDebug('We are here!');
        },
        timeout: Duration(seconds: 5),
      );
      Utils.logDebug('Transaction successful in ${DateTime.now().difference(start).inMilliseconds}ms.');
    } else {
      DocumentReference ref =
          FirebaseFirestore.instance.collection(collection).doc(this.id);
      if (batch != null)
        batch.update(ref, data);
      else if (transaction != null)
        transaction.update(ref, data);
      else
        await ref.update(data);
    }
  } catch (e, s) {
    Utils.logException('Error updating document $id in $collection.', e, s);
    // Propagate the error.
    rethrow;
  }
}
This requires the updates to be done on the server (or fail)
For that you could use transactions and batched writes.
Transactions will fail when the client is offline.
Check out the documentation.
To get live data from the server once, you would use:
firebase.firestore()
  .doc("somecollection/docId")
  .get({ source: "server" })
  .then((snapshot) => {
    // if here, snapshot.data() is from the server
    // TODO: do something with data
  })
  .catch((err) => {
    // if here, get() encountered an error (insufficient permissions, server not available, etc)
    // TODO: handle the error
  });
To get realtime live data from only the server (ignoring the cache), you would use:
const unsubscribe = firebase.firestore()
  .doc("somecollection/docId")
  .onSnapshot({ includeMetadataChanges: true }, {
    next(snapshot) {
      // ignore cache data
      if (snapshot.metadata.fromCache) return;
      // if here, snapshot.data() is from the server
      // TODO: do something with data
    },
    error(err) {
      // if here, onSnapshot() encountered an error (insufficient permissions, etc)
      // TODO: handle the error
    }
  });
To write to the server, you would use the normal write operations (delete(), set(), and update()), as they all return Promises that will not resolve while the client is offline. Once they have resolved, the data stored on the server has been updated. One way to bound the wait yourself is sketched below.
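As far as I know, update() itself takes no deadline, so to get the asker's 5-second budget you can race the write against a timer. A sketch (the wrapper name is illustrative); note the write may still complete on the server later, this only bounds how long the caller waits:

// Hypothetical wrapper: resolve when the server acknowledges the write,
// or reject after `ms` milliseconds.
function updateWithTimeout(docRef, data, ms = 5000) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`no server ack within ${ms}ms`)), ms)
  );
  // update() only resolves once the server has accepted the write
  return Promise.race([docRef.update(data), timeout]);
}

// usage
updateWithTimeout(firebase.firestore().doc("somecollection/docId"), { state: "run" })
  .then(() => console.log("confirmed on server"))
  .catch((err) => console.log("failed or timed out:", err.message));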
To test whether you are online or not, you can try to pull a nonexistent document down from the server, like so:
/**
 * Attempts to fetch the non-existent document `/.info/connected` to determine
 * if a connection to the server is available.
 * @return {Promise<boolean>} promise that resolves to a boolean indicating
 * whether a server connection is available
 */
function isCurrentlyOnline() {
  // unlike RTDB, this document doesn't exist and has no function
  // it must be made readable in security rules
  return firebase.firestore()
    .doc(".info/connected")
    .get({ source: "server" })
    .then(
      () => {
        // read data successfully, we must be online
        return true;
      },
      (err) => {
        // failed to read data; if code is "unavailable", we are offline
        // for any other error, rethrow it
        if (err.code === "unavailable")
          return false;
        throw err;
      }
    );
}
/**
 * A function that attaches a listener for when a connection to Firestore has
 * been established or disconnected.
 *
 * This function listens to the non-existent `/.info/connected` document and
 * uses its `fromCache` metadata to **estimate** whether a connection to
 * Firestore is currently available.
 * **Note:** This callback will only be invoked after the first successful
 * connection to Firestore.
 *
 * @param {((error: unknown | null, isOnline: boolean) => unknown)} callback the
 * callback to invoke when the isOnline state changes
 * @return {(() => void)} a function that unsubscribes this listener when
 * invoked
 */
function onOnline(callback) {
  let hasConnected = false;
  // unlike RTDB, this document doesn't exist and has no function
  // it must be made readable in security rules
  return firebase.firestore()
    .doc(".info/connected")
    .onSnapshot(
      { includeMetadataChanges: true }, // note: must be a boolean
      {
        next(snapshot) {
          const { fromCache } = snapshot.metadata;
          if (!hasConnected) {
            if (fromCache) return; // ignore this event
            hasConnected = true;
          }
          callback(null, !fromCache);
        },
        error(err) {
          callback(err);
        }
      }
    );
}

How to implement transactions in Meteor Method calls

Suppose I have 2 collections, "PlanSubscriptions" and "ClientActivations", and I am serially doing an insert on both collections.
The later insert depends on the previous one; if either of them fails, then the entire operation must roll back.
How can I achieve that in Meteor 1.4?
Since MongoDB doesn't support multi-document atomicity, you will have to manage it with method chaining.
You can write a method, say transaction, where you call PlanSubscriptions.insert(data, callback). In the callback you call ClientActivations.insert(data, callback1) if the first insertion succeeded, and in callback1 you return truthy if the second insertion succeeded, falsy otherwise. If the first insertion returns an error you don't need to do anything, but if the second insertion returns an error, remove the document with the id obtained from the insertion into the first collection.
I can suggest the following structure:
'transaction'() {
  PlanSubscriptions.insert(data, (error, result) => {
    if (result) {
      // result contains the _id
      let id_plan = result;
      ClientActivations.insert(data, (error, result) => {
        if (result) {
          // result contains the _id
          return true;
        } else if (error) {
          PlanSubscriptions.remove(id_plan);
          return false;
        }
      });
    } else if (error) {
      return false;
    }
  });
}
There is no way to do that in Meteor, since MongoDB is not an ACID-compliant database. It has single-document update atomicity, but not multi-document atomicity, which is what your case with the two collections requires.
From the mongo documentation:
When a single write operation modifies multiple documents, the modification of each document is atomic, but the operation as a whole is not atomic and other operations may interleave.
A way to isolate the visibility of your multi-document updates is available, but it's probably not what you need.
Using the $isolated operator, a write operation that affects multiple documents can prevent other processes from interleaving once the write operation modifies the first document. This ensures that no client sees the changes until the write operation completes or errors out.
An isolated write operation does not provide “all-or-nothing” atomicity. That is, an error during the write operation does not roll back all its changes that preceded the error.
However, there are a couple of libraries which try to tackle the problem at the app level. I recommend taking a look at fawn.
In your case, where you have exactly two dependent collections, it's possible to take advantage of the two-phase commit technique (a condensed sketch follows below). Read more about it here: two-phase-commits
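For reference, here is a very condensed sketch of that two-phase-commit pattern, adapted to the two collections from the question and using the raw Node MongoDB driver (the Transactions collection and the pendingTx marker field are illustrative names, not part of the linked recipe):

// Condensed two-phase-commit sketch.
async function insertBoth(db, subscriptionDoc, activationDoc) {
  const txns = db.collection('Transactions');
  const { insertedId: txId } = await txns.insertOne({ state: 'initial' });
  try {
    // phase 1: apply both writes, each tagged with the transaction id
    await db.collection('PlanSubscriptions').insertOne({ ...subscriptionDoc, pendingTx: txId });
    await db.collection('ClientActivations').insertOne({ ...activationDoc, pendingTx: txId });
    // phase 2: clear the markers and mark the transaction done
    await db.collection('PlanSubscriptions').updateOne({ pendingTx: txId }, { $unset: { pendingTx: '' } });
    await db.collection('ClientActivations').updateOne({ pendingTx: txId }, { $unset: { pendingTx: '' } });
    await txns.updateOne({ _id: txId }, { $set: { state: 'done' } });
  } catch (e) {
    // "rollback": remove whatever still carries our marker
    await db.collection('PlanSubscriptions').deleteMany({ pendingTx: txId });
    await db.collection('ClientActivations').deleteMany({ pendingTx: txId });
    await txns.updateOne({ _id: txId }, { $set: { state: 'cancelled' } });
    throw e;
  }
}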
Well, I figured it out myself.
I added the package babrahams:transactions.
In a server-side Meteor method, I called the tx object that is globally exposed by the package. The overall server-side Meteor.methods({}) looks like below.
import { Meteor } from 'meteor/meteor';
import { PlanSubscriptions } from '/imports/api/plansubscriptions/plansubscriptions.js';
import { ClientActivations } from '/imports/api/clientactivation/clientactivations.js';

Meteor.methods({
  'createClientSubscription'(subscriptionData, clientActivationData) {
    var txid;
    try {
      txid = tx.start("Adding Subscription to our database");
      PlanSubscriptions.insert(subscriptionData, { tx: true });
      ClientActivations.insert(clientActivationData, { tx: true });
      tx.commit();
      return true;
    } catch (e) {
      tx.undo(txid);
    }
    return false;
  }
});
With every insert I added {tx: true}; this marks the insert as part of the transaction.
Server Console Output:
I20170523-18:43:23.544(5.5)? Started "Adding Subscription to our database" with
transaction_id: vdJQvFgtyZuWcinyF
I20170523-18:43:23.547(5.5)? Pushed insert command to stack: vdJQvFgtyZuWcinyF
I20170523-18:43:23.549(5.5)? Pushed insert command to stack: vdJQvFgtyZuWcinyF
I20170523-18:43:23.551(5.5)? Beginning commit with transaction_id: vdJQvFgtyZuWcinyF
I20170523-18:43:23.655(5.5)? Executed insert
I20170523-18:43:23.666(5.5)? Executed insert
I20170523-18:43:23.698(5.5)? Commit reset transaction manager to clean state
For more information you can go to: https://github.com/JackAdams/meteor-transactions
NOTE: I am using Meteor 1.4.4.2
Just sharing this link for future readers:
https://forums.meteor.com/t/solved-transactions-with-mongodb-meteor-methods/48677
import { MongoInternals } from 'meteor/mongo';

// utility async function to wrap async raw mongo operations with a transaction
const runTransactionAsync = async asyncRawMongoOperations => {
  // setup a transaction
  const { client } = MongoInternals.defaultRemoteCollectionDriver().mongo;
  const session = await client.startSession();
  await session.startTransaction();
  try {
    // running the async operations
    let result = await asyncRawMongoOperations(session);
    await session.commitTransaction();
    // transaction committed - return value to the client
    return result;
  } catch (err) {
    await session.abortTransaction();
    console.error(err.message);
    // transaction aborted - report error to the client
    throw new Meteor.Error('Database Transaction Failed', err.message);
  } finally {
    session.endSession();
  }
};
import { runTransactionAsync } from '/imports/utils'; // or where you defined it

Meteor.methods({
  async doSomething(arg) {
    // remember to check method input first
    // define the operations we want to run in transaction
    const asyncRawMongoOperations = async session => {
      // it's critical to receive the session parameter here
      // and pass it to every raw operation as shown below
      const item = await collection1.rawCollection().findOne(arg, { session: session });
      const response = await collection2.rawCollection().insertOne(item, { session: session });
      // if Mongo or you throw an error here runTransactionAsync(..) will catch it
      // and wrap it with a Meteor.Error(..) so it will arrive to the client safely
      return 'whatever you want'; // will be the result in the client
    };
    let result = await runTransactionAsync(asyncRawMongoOperations);
    return result;
  }
});
