In instrumented tests, how do you make Cloud Firestore write commands succeed when disabling the network? - firebase

So I am attempting to use ONLY the Cloud Firestore offline cache as the API for my instrumentation tests, to avoid having to read from and write to the server database during my integration tests.
First, in my test setup method, I call this method
protected fun setFirestoreToOfflineMode() {
    Tasks.await(FirebaseFirestore.getInstance().disableNetwork())
}
Then, at the beginning of each relevant test, I use
fun givenHasTrips(vararg trips: Trip) {
    GlobalScope.launch(Dispatchers.Default) {
        trips.forEach {
            firestoreTripApi.put(it)
        }
    }
}
In that put method, I have the following code:
try {
    Tasks.await(tripCollection().document(tripData.id).set(tripData),
        firestoreApiTimeoutSeconds, TimeUnit.SECONDS)
    Either.Right(Unit)
} catch (e: Throwable) {
    Either.Left(Failure.ServerError)
}
I am calling the set() method and waiting for a successful result so that I can report the operation as a success and update my UI afterward.
What happens is that the cache DB is written correctly, BUT the set() call times out because the database is in offline mode. I have read that Firestore only confirms success once the server DB has actually been written. If that is the case, I do not know whether it is possible to keep this call from timing out when operating strictly in offline-cache mode.
Is there a solution to have Firestore act as if the local cache database was the source of truth and return successes if placed in offline mode, just for tests?

The Task returned by the methods that modify the database (set, update, delete) only issues a callback when the data is fully committed to the cloud. There is no way to change this behavior.
What you can do instead is set up a listener to the document(s) that are expected to change, and wait for the listener to trigger. The listener will trigger even while offline.
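For example, a test helper along these lines could block until the write shows up in the cache. This is a minimal sketch, assuming the same tripCollection() helper as in the question; the helper name and the 5-second timeout are illustrative:

import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Waits until the document appears in the local cache, instead of waiting
// for the server acknowledgement that never arrives while offline.
fun awaitLocalWrite(tripId: String) {
    val latch = CountDownLatch(1)
    val registration = tripCollection().document(tripId)
        .addSnapshotListener { snapshot, error ->
            // Fires from the local cache even with the network disabled;
            // such cache-only results have snapshot.metadata.hasPendingWrites() == true.
            if (error == null && snapshot != null && snapshot.exists()) {
                latch.countDown()
            }
        }
    try {
        check(latch.await(5, TimeUnit.SECONDS)) { "Timed out waiting for local write of $tripId" }
    } finally {
        registration.remove()
    }
}

The test can then call put(...) followed by awaitLocalWrite(...) instead of awaiting the set() Task itself.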

Related

How to continue running Firebase Cloud Function after request is finished

Really bizarre that Firebase doesn't seem to work quite like a typical Express app. Whatever I write in Express and copy-paste to Firebase Functions typically throws an error. There is one problem I can't figure out on my own, though.
This endpoint is designed to start a function and live long enough to finish an even longer task. The request is a webhook (send docs, we will transform them and ping you at another, specified webhook when it's done). A very simplified example below:
router.post('/', (req, res) => {
  try {
    generateZipWithDocuments(data) // on purpose it's not async so request can return freely
    res.sendStatus(201)
  } catch (error) {
    res.send({ error })
  }
})
On my local machine it works (both as a pure Express app and as locally emulated Firebase Functions), but in the cloud it has problems, and even though I put in a cavalcade of console.log() calls I don't get much information. No error from Firebase.
If generateZipWithDocuments() is not asynchronous, res.sendStatus() will be executed immediately after it, and the Cloud Function will be terminated (so the job started by generateZipWithDocuments() will not be completed). See the doc here for more details.
You have two possibilities:
You make it asynchronous and wait for its job to complete before sending the response. You would typically use async/await for that. Note that the maximum execution time for a Cloud Function is 9 minutes.
You delegate the long-running job to another Cloud Function and then send the response. To delegate the job to another Cloud Function, you should use Pub/Sub. See Pub/Sub triggers, the sample quickstart, and this SO thread for more details on how to implement that. In the Pub/Sub-triggered Function, when the job is done you can inform the user via an email, a notification, the update of a Firestore document on which you have set a listener, etc. If generateZipWithDocuments() takes a long time, this is clearly the most user-friendly option.

Cosmos Change Feed - errors, exceptions and service-failure scenarios

All,
I am using the Change Feed Processor Library. I want to know the best way to handle service failures, along with exception/error scenarios, in the ProcessChangesAsync method. Below are the events I am referring to.
1) Service failure - the service hosting the processor library crashes in the middle of some operation. How can the process restart from the same document (the doc at the failure instance)? Is there any built-in mechanism where the change feed will start from the last failed document? E.g. assume the current batch has 10 docs; 5 are processed successfully and then the service breaks because of a network failure or some other reason. Will my process start with the 6th document once the service is restarted? How can I achieve this?
2) Exceptions and errors - any errors in the ProcessChangesAsync method can be handled using try/catch at the global level, but how do I persist those failed records and make them available to the next batch? Again, I'm looking for any built-in mechanism in the change feed processor.
1) The Processor Library, by default, checkpoints after a successful run of ProcessChangesAsync. In the latest library version, you can customize the Checkpointer to do manual checkpoints in case you need that. If for some reason the processor shuts down before checkpointing, it will resume processing from the last successful checkpoint stored in the Leases collection. In your case, it will start with the first document of the batch again, so you will never lose a change, but you could experience double processing (this is an "at least once" model).
2) There is no built-in mechanism you can leverage; handling exceptions within ProcessChangesAsync is your responsibility. You could not only add a global try/catch but also, in the case where you are looping over the documents, add a try/catch inside the loop to handle a failing document (maybe send it to a queue for later analysis/post-processing) without losing the batch. If you require logging for those errors (I'm assuming that's what you mean by persisting errors?), the latest version is compatible with LibLog, so plugging in your own custom logging is as simple as:
using Microsoft.Azure.Documents.ChangeFeedProcessor.Logging;

var hostName = "SampleHost";
var tracelogProvider = new TraceLogProvider(); // You can use any provider supported by LibLog
using (tracelogProvider.OpenNestedContext(hostName))
{
    LogProvider.SetCurrentLogProvider(tracelogProvider);
    // After this, create IChangeFeedProcessor instance and start/stop it.
}
Source
Extra info for the comments
To avoid exceptions halting the batch or causing a batch to be reprocessed, you can have handling like this:
public async Task ProcessChangesAsync(IChangeFeedObserverContext context, IReadOnlyList<Document> documents, CancellationToken cancellationToken)
{
    try
    {
        foreach (var document in documents)
        {
            try
            {
                // Do your work for the document
            }
            catch (Exception ex)
            {
                // Something happened with the current document: handle it, send it to
                // a queue / another storage to analyze, log it. This catch makes the
                // loop continue with the next document.
            }
        }
    }
    catch (Exception ex)
    {
        // Something unhandled happened; log it and avoid rethrowing so the next batch is processed.
    }
}

Using Firestore from a JobIntentService: Failed to gain exclusive lock to the Firestore client's offline persistence

Whenever I exit the app while I have an alarm set, and the alarm goes off while the app is "DEAD", I get an exception while trying to update a field in Firestore.
The code works when the app is running in the foreground so I really have no clue of what is going on. Either way, here is the code for 2 functions which get called from the JobIntentService which is in turn created from a BroadcastReceiver:
private val firestoreInstance: FirebaseFirestore by lazy { FirebaseFirestore.getInstance() }

fun updateTaskCompletedSessions(taskDocRefPath: String, completedSessions: Int) {
    val taskDocRef = firestoreInstance.document(taskDocRefPath)
    taskDocRef.get().addOnSuccessListener { documentSnapshot ->
        documentSnapshot.reference
            .update(mapOf("completedSessions" to completedSessions))
    }
}

fun updateTaskSessionProgress(taskDocRefPath: String, sessionProgress: String) {
    val taskDocRef = firestoreInstance.document(taskDocRefPath)
    taskDocRef.get().addOnSuccessListener { documentSnapshot ->
        documentSnapshot.reference
            .update(mapOf("sessionProgress" to sessionProgress))
    }
}
The full error goes as follows:
Failed to gain exclusive lock to the Firestore client's offline persistence.
This generally means you are using Firestore from multiple processes in your app. Keep in mind that multi-process Android apps execute the code in your Application class in all processes, so you may need to avoid initializing Firestore in your Application class. If you are intentionally using Firestore from multiple processes, you can only enable offline persistence (i.e. call setPersistenceEnabled(true)) in one of them.
I will appreciate any help. Thank you!
I'm happy to announce that I found a solution! I was using two consecutively firing JobIntentServices - one for completedSessions, the other for sessionProgress. (Bad design, I know...)
When I played around with it and made just ONE JobIntentService call both of these functions, the exception was gone, which makes perfect sense.
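For what it's worth, the two field updates can also be combined into a single write so that one JobIntentService (and one Firestore client instance) handles the whole alarm. A minimal sketch, reusing firestoreInstance from the question; it also skips the initial get(), which only re-derives the same DocumentReference:

// Combines both field updates into one update() call on the document.
fun updateTask(taskDocRefPath: String, completedSessions: Int, sessionProgress: String) {
    firestoreInstance.document(taskDocRefPath)
        .update(
            mapOf(
                "completedSessions" to completedSessions,
                "sessionProgress" to sessionProgress
            )
        )
}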

Detect local data update in Firebase

When I call setValue() while my user is offline in my Android app, Firebase provides a snapshot of the change to my listener from the local store, even though the change hasn't been committed to the server. I understand that Firebase will make an effort to sync with the server when connectivity is restored, but if the app restarts, that change is lost forever.
I've already got a custom local store for all my data, and I was hoping I could keep track of uncommitted changes myself, clearing a dirty bit when I hear back from the server about a successful setValue(). But that doesn't seem possible if the local store is pretending the data is committed.
I don't think I want to use disk persistence (since I already have my own). Is there a way to tell when the update is from the local store vs a real server commit? Or maybe I should use a completion listener to clear the dirty bit?
I understand that Firebase will make an effort to sync with the server when connectivity is restored, but if the app restarts that change is lost forever.
The default behavior is indeed that the pending writes are kept in memory. You can however easily change that by enabling disk persistence.
Firebase.getDefaultConfig().setPersistenceEnabled(true);
But since you indicate you don't want to use that, the alternative is to use a CompletionListener to detect when the changes have been committed to the database on Firebase's servers:
ref.setValue("I'm writing data", new Firebase.CompletionListener() {
    @Override
    public void onComplete(FirebaseError firebaseError, Firebase firebase) {
        if (firebaseError != null) {
            System.out.println("Data could not be saved. " + firebaseError.getMessage());
        } else {
            System.out.println("Data saved successfully.");
        }
    }
});
Note that completion listeners are not persisted to disk. So even if you enable disk persistence, the completion callbacks will not fire when pending writes are fulfilled after an app restart. So don't try to mix disk persistence and completion listeners.
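If you do go the completion-listener route for the dirty-bit idea from the question, the bookkeeping could look like the following sketch (in Kotlin). DirtyTracker is a hypothetical stand-in for your custom local store, not a Firebase API, and the legacy Firebase Android SDK from the example above is assumed:

// Hypothetical interface over the custom local store from the question.
interface DirtyTracker {
    fun markDirty(key: String)
    fun markClean(key: String)
}

fun writeWithDirtyBit(tracker: DirtyTracker, ref: Firebase, key: String, value: Any) {
    tracker.markDirty(key) // treat as uncommitted until the server confirms
    ref.setValue(value, Firebase.CompletionListener { firebaseError, _ ->
        if (firebaseError == null) {
            tracker.markClean(key) // the write reached Firebase's servers
        }
        // On an error, or if the app restarts before this fires, the key
        // stays dirty and can be re-sent on the next launch.
    })
}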

How to use transactions in Cloud Datastore

I want to use Datastore from Cloud Compute through Java and I am following Getting started with Google Cloud Datastore.
My use case is quite standard - read one entity (lookup), modify it and save the new version. I want to do it in a transaction so that if two processes do this, the second one won't overwrite the changes made by the first one.
I managed to issue a transaction and it works. However, I don't know what would happen if the transaction fails:
How to identify a failed transaction? Probably a DatastoreException with some specific code or name will be thrown?
Should I issue a rollback explicitly? Can I assume that if a transaction fails, nothing from it will be written?
Should I retry?
Is there any documentation on that?
How to identify a failed transaction? Probably a DatastoreException with some specific code or name will be thrown?
Your code should always ensure that a transaction is either successfully committed or rolled back. Here's an example:
// Begin the transaction.
BeginTransactionRequest begin = BeginTransactionRequest.newBuilder()
    .build();
ByteString txn = datastore.beginTransaction(begin)
    .getTransaction();
try {
    // Zero or more transactional lookup()s or runQuery()s.
    // ...
    // Followed by a commit().
    CommitRequest commit = CommitRequest.newBuilder()
        .setTransaction(txn)
        .addMutation(...)
        .build();
    datastore.commit(commit);
} catch (Exception e) {
    // If a transactional operation fails for any reason,
    // attempt to roll back.
    RollbackRequest rollback = RollbackRequest.newBuilder()
        .setTransaction(txn)
        .build();
    try {
        datastore.rollback(rollback);
    } catch (DatastoreException de) {
        // Rollback may fail due to a transient error or if
        // the transaction was already committed.
    }
    // Propagate the original exception.
    throw e;
}
An exception might be thrown by commit() or by another lookup() or runQuery() call inside the try block. In each case, it's important to clean up the transaction.
Should I issue a rollback explicitly? Can I assume that if a transaction fails, nothing from it will be written?
Unless you're sure that the commit() succeeded, you should explicitly issue a rollback() request. However, a failed commit() does not necessarily mean that no data was written. See the note on this page.
Should I retry?
You can retry using exponential backoff. However, frequent transaction failures may indicate that you are attempting to write too frequently to an entity group.
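A generic backoff wrapper could look like the following sketch (in Kotlin, for brevity); the attempt count and base delay are illustrative, and each attempt should begin a fresh transaction (a new BeginTransactionRequest) rather than reuse the rolled-back one:

// Retries a block with exponential backoff: 100ms, 200ms, 400ms, ...
fun <T> withRetries(maxAttempts: Int = 5, baseDelayMs: Long = 100, block: () -> T): T {
    var lastError: Exception? = null
    for (attempt in 0 until maxAttempts) {
        try {
            return block() // success: return the result of the transactional work
        } catch (e: Exception) {
            lastError = e
            if (attempt < maxAttempts - 1) {
                // Back off before the next attempt; the delay doubles each time.
                Thread.sleep(baseDelayMs shl attempt)
            }
        }
    }
    throw lastError ?: IllegalStateException("maxAttempts must be positive")
}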
Is there any documentation on that?
https://cloud.google.com/datastore/docs/concepts/transactions
