Why do we have to remove the location listener in FusedLocationProviderClient?

I had a problem where the location callback wasn't refreshing, so I added mFusedLocationProviderClient.removeLocationUpdates().
Why do we have to remove the location callback? Shouldn't it update itself whenever I move to a new location?
The docs state:
This call will keep the Google Play services connection active, so
make sure to call removeLocationUpdates(LocationCallback) when you no
longer need it, otherwise you lose the benefits of the automatic
connection management.
Can someone explain this?

Related

Wait for Firebase Cloud trigger function to end before launching HTTP request

Question
I have a frontend user-registration page. When the user clicks 'Register', a user document is created in my Firestore database. I made an onCreate trigger function which listens for new user documents being created. This trigger function creates and updates other documents in my database (it adds extra information and creates some documents based on the user document that was just created).
Once the user clicks Register in my frontend, they are redirected to a new page. In initState of that new page, it makes an HTTP (Cloud Function) request which needs the information newly created by my trigger function to build its response correctly. The problem is that this HTTP request is launched before my trigger function finishes, so the HTTP function errors because it cannot yet find the information it needs; the trigger function hasn't ended and hasn't fully updated the database yet.
Does anyone know how to resolve this? The first thing I did, which was just a quick and ugly workaround, was to add a delay of a few seconds in my frontend right after the 'Register' click, so that the redirect to the new page, and therefore the HTTP request, was delayed. But that was only a temporary fix and I would like a real solution now. Is there a way to tell the HTTP request not to run while this specific trigger function is running? Or, if that isn't possible, is there another way to architect my functions or my frontend to prevent this? I've thought about it and tried to find someone with the same problem, but I can't find any questions about this (which scares me a little, because it makes me feel like I'm missing something basic).
If you could help, thanks in advance.
What I am hearing here is that you have a classic race condition. My understanding is that your user clicks "Register", which causes a new document to be added to the database. There is then a trigger on that insertion which updates other documents/fields. It is here that you have introduced parallelism, and hence the race.
Your design needs to change so that a "signal" is sent back from GCP when the updates that were performed asynchronously have completed. Since the browser doesn't receive unsolicited signals, you will have to design a solution where your browser calls into GCP in such a way that the call doesn't return until the asynchronous changes have been completed. One approach is to NOT have an onCreate trigger but instead have the browser explicitly call a Cloud Function to perform the updates after the initial insertion (or even a Cloud Function that performs the insertion itself). The Cloud Function should not return until the data is consistent and ready to be read back by the browser.
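As a rough sketch of that idea (not from the original answer; registerUser, users, and profiles are made-up names), a callable Cloud Function could perform both the insertion and the follow-up writes, and only return once everything is committed:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Callable function: the browser awaits this call and only redirects once it
// resolves, so there is no window in which the data is half-written.
exports.registerUser = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  const db = admin.firestore();
  const uid = context.auth.uid;
  const batch = db.batch();

  // The writes the onCreate trigger used to perform happen here instead,
  // in a single batch so they commit together.
  batch.set(db.collection('users').doc(uid), {
    displayName: data.displayName || '',
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
  });
  batch.set(db.collection('profiles').doc(uid), { bio: '', initialized: true });

  await batch.commit();
  // Returning signals the client that every document is ready to be read.
  return { status: 'ok' };
});

On the client, firebase.functions().httpsCallable('registerUser') returns a promise; redirect to the next page only after that promise resolves, and the follow-up HTTP request will find the data already in place.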

Why do I need to call Firebase's onDataChange method to get a DataSnapshot?

It seems restrictive that I can only get a snapshot to read data from my database when the data changes. Is there something I'm missing about onDataChange?
What if I want to populate a page with data read dynamically from my database, but no data is changing in the database? Do I still need to call onDataChange?
Firebase's onDataChange fires immediately with a snapshot of the current value in the database and subsequently whenever the data changes.
In fact, the Firebase documentation says this:
This method is triggered once when the listener is attached and again every time the data, including children, changes.
The simple answer is that you can't block the user interface with a long-running task such as a database query or network request. At best the user gets an unresponsive application; at worst an "Application Not Responding" (ANR) crash occurs. That's why the API is designed around a listener pattern. I assume this is Android we are talking about, but the answer is valid for other platforms. If you are doing this on a background thread then yes, in theory you could do what you are describing, but I don't think Firebase is designed that way.
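To illustrate the "fires immediately when attached" behavior, here is the equivalent pattern in the Firebase Web SDK in JavaScript, purely as an example; the Android ValueEventListener behaves the same way, and 'scores' and renderPage are made-up names:

// Fires once with the current value as soon as the listener is attached,
// then again every time the data (or any child) changes.
const ref = firebase.database().ref('scores');
ref.on('value', function (snapshot) {
  renderPage(snapshot.val());
});

// If you only want the current value a single time, use once() instead:
ref.once('value').then(function (snapshot) {
  renderPage(snapshot.val());
});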

How to make Meteor changes instant?

Meteor is supposed to pre-load a small part of Mongo on the client so that it can simulate changes to the DB, making any changes to the page happen instantly while the real DB update happens in the background.
However, on my site I'm seeing a 1-2 second delay on simple actions that make changes to the DB, such as deleting a post.
Is there some extra coding that needs to be done to ensure the client-side simulation works?
As Michel Floyd pointed out, if your Meteor method is defined as server-only code, there is no way to simulate the method call on the client.
Try moving the Meteor method declarations into shared code and see if that changes the latency.
Also, without seeing some code and the project structure, the problem could be elsewhere...
If you are using a server-only method, make sure your MongoDB has oplog tailing enabled so that the change is picked up immediately and sent to the client. If you are using a hosted DB, such as the free mLab tier, it's possible you have no oplog, in which case Meteor falls back to polling the database periodically to check for changes.
But in any case, if the method is server-side only, you will always have delays. As mentioned in this thread, move the method definition outside the server folder (into /lib, for example) so that the method becomes available on the client.
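As a minimal sketch of what "shared code" means here (Posts and posts.remove are placeholder names, not from the question):

// lib/methods.js -- files under /lib load on BOTH client and server, so the
// client can simulate the call against minimongo (instant UI update) while
// the server performs the authoritative write.
Posts = new Mongo.Collection('posts');

Meteor.methods({
  'posts.remove': function (postId) {
    check(postId, String);
    Posts.remove(postId);
  }
});

// Client-side usage, e.g. in an event handler:
// Meteor.call('posts.remove', postId);

If the same method lived only under /server, the client would have nothing to simulate and the post would not disappear until the server round trip completes.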

Meteor.userId lost on server changes dev reload

I'm building a somewhat big application. When I change code, the server restarts and forces a refresh on the client.
The client keeps its session data, but I seem to lose the previously synced Meteor.Collection data, forcing my users to re-sync everything.
I'm on 0.5.7 (I did not see anything about this in 0.5.8).
Is that the expected behavior, or am I missing something?
This can be tested by adding something like the following at client start (assuming Components is your Meteor.Collection):
console.log("Length: ", Components.find().fetch().length);
No, you're not missing anything. The collection data should be re-synced on code pushes. However, if your collection data takes more than a second or two to load in, you should look into trying to send less data to the client by creating finer-grained subscriptions that only send data the client needs at the moment.
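As a hedged example of a finer-grained subscription (the publication name, projectId field, and field list are made up; Components is the collection from the question):

// server/publications.js -- publish only the documents and fields the
// current page actually needs, instead of the whole collection.
Meteor.publish('components.forProject', function (projectId) {
  check(projectId, String);
  return Components.find(
    { projectId: projectId },
    { fields: { name: 1, status: 1, projectId: 1 } }
  );
});

// On the client, subscribe per page/route rather than once for everything:
// Meteor.subscribe('components.forProject', currentProjectId);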

How can I add cookies to Seaside responses without redirecting?

I'm making a small web application in Seaside. I have a login component, and after the user logs in I want to send along a cookie when the next component renders itself. Is there a way to get at the object handling the response so I can add something to the headers it will output?
I'm trying to avoid using WASession>>redirectWithCookies since it seems pretty kludgey to redirect only because I want to set a cookie.
Is there another way that already exists to add a cookie that will go out with the next response?
There is currently no built-in way to add cookies during the action/callback phase of request processing. This is most likely a defect and is noted in this issue: http://code.google.com/p/seaside/issues/detail?id=48
This is currently slated to be fixed for Seaside 2.9, but I don't know whether it will even be backported to 2.8.
Keep in mind that there is already (by default) a redirection between the action and rendering phases to prevent a Refresh from re-triggering the callbacks, so in the grand scheme of things, one more redirect in this case isn't so bad.
If you still want to dig further, have a look at WARenderContinuation>>handleRequest:. That's where callback processing is triggered and the redirect or rendering phase begun.
Edited to add:
The issue has now been fixed and (in the latest development code) you can now properly add cookies to the current response at any time. Simply access the response object in the current request context and add the cookie. For example, you might do something like:
self requestContext response addCookie: aCookie
This is unlikely to be backported to Seaside 2.8 as it required a fairly major shift in the way responses are handled.
I've just looked into this in depth, and the answer seems to be no. Specifically, there's no way to get at the response from the WARenderCanvas or anything it can access (it holds onto the WARenderingContext, which holds onto the WAHtmlStreamDocument, which holds onto the response's stream but not the response itself). I think it would be reasonable to give the context access to the current response, precisely to be able to set headers on it, but you asked if there was already a way, so: no.
That said, Seaside does a lot of extra redirecting, and it doesn't seem to have much impact on the user experience, so maybe the thing to do is to stop worrying about it seeming kludgey and go with the flow of the API that's already there :)
