Initially, I implemented exactly the Remote Config example from this link:
https://rnfirebase.io/docs/v3.3.x/config/example
firebase.config().fetch()
  .then(() => {
    return firebase.config().activateFetched();
  })
  .then((activated) => {
    if (!activated) console.log('Fetched data not activated');
    return firebase.config().getValue('hasExperimentalFeature');
  })
  .then((snapshot) => {
    const hasExperimentalFeature = snapshot.val();
    if (hasExperimentalFeature) {
      enableSuperCoolFeature();
    }
    // continue booting app
  })
  .catch(console.error);
My idea is not to subscribe to Remote Config; I just want to know whether anything has changed at app startup (componentWillMount).
But I see that, with the code above, the Remote Config variables are not updated on a new app startup. I searched and found this information about the 12-hour cache:
Remote Config caches values locally after the first successful fetch request. By default the cache expires after 12 hours, but you can change the cache expiration for a specific request by passing the desired cache expiration, in seconds, to fetchWithExpirationDuration:completionHandler: (on iOS) or fetch (on Android).
They also do NOT recommend changing that cache to a small value:
Note that if you reduce this expiration time to a very small value, you might start hitting the client-side throttling limit, which prevents your client from making a fetch request more than a few times per hour.
https://firebase.google.com/support/faq/#remote-config-requests
BUT, since I have to update values at app startup, I decided not to follow this recommendation and to test it. In my tests using firebase.config().fetch(0) (fetch 0 to avoid the cache), the app apparently is not subscribing to Remote Config; there are no listeners on the JavaScript side.
So, could I keep using firebase.config().fetch(0) without worrying? Does the update only occur when the code runs?
What does the internal implementation of firebase.config().fetch() / snapshot look like?
Using a cache timeout of 0 seconds is only meant for development purposes; it will not work for you in production.
As per the documentation, you can fetch 5 times in a 60-minute window before getting throttled by the client SDK.
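If the 12-hour default is too stale for a startup check, a middle ground is a short but non-zero expiration. A minimal sketch against the same rnfirebase v3.x API as above (the 600-second value is only an illustration, not a recommended constant):
// Fetch with a 10-minute cache instead of 0, staying well under the
// client-side throttling limit, then activate whatever was fetched.
firebase.config().fetch(600)
  .then(() => firebase.config().activateFetched())
  .then((activated) => {
    if (!activated) console.log('Cached values were already active');
    // continue booting app
  })
  .catch(console.error);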
Does that work for you?
I use minishlink/web-push to send pushes, and I made a service worker, serviceworker.js, for push messages, with push, notificationclick, and notificationclose event listeners.
On the site where the subscription takes place, I have sw.js:
let timeStampInMs = new Date().getTime();
importScripts('https://super-push-site.com/serviceworker.js?ts=' + timeStampInMs);
It works fine.
But I have made a new version of the service worker and placed it at the old location (https://super-push-site.com/serviceworker.js).
How can I update the service worker version for my subscribers without their visiting the site where the subscription takes place?
Before answering your question, I would like to note that it's considered bad practice to add a cache-buster parameter to your service worker URL, since the browser will already enqueue the new service worker for installation if it's byte-different from the existing one. You can read more about this in The Service Worker Lifecycle.
Now, to answer your actual question:
You can manually trigger the update by calling the update() method of your service worker registration when a push message is received:
self.addEventListener('push', function (event) {
  ...
  event.waitUntil(
    Promise.all([
      self.registration.showNotification(title, options),
      self.registration.update()
    ])
  );
});
You may also want to trigger self.registration.update() only if there actually is a newer version of the service worker available. To do that (see the sketch after this list):
1) Store the version identifier of your SW in a variable.
2) Always send the latest SW version identifier within your push message payload.
3) Compare the two and trigger self.registration.update() if they don't match.
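A minimal sketch of that check; the SW_VERSION constant and the version, title, and options fields in the payload are hypothetical names, not part of any library:
// sw.js - a version identifier baked into this build of the service worker
const SW_VERSION = '1.0.0';

self.addEventListener('push', function (event) {
  const payload = event.data ? event.data.json() : {};
  event.waitUntil(
    Promise.all([
      self.registration.showNotification(payload.title, payload.options),
      // Only ask the browser to re-check serviceworker.js when the push
      // payload reports a version we don't have yet
      payload.version && payload.version !== SW_VERSION
        ? self.registration.update()
        : Promise.resolve()
    ])
  );
});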
Hope this helps!
I am currently developing a hybrid mobile app using Ionic. When the app starts up and a user writes to the Realtime Database for the first time, the write is always delayed by around 10 or more seconds, but any subsequent writes are almost instantaneous (less than 1 second).
My measurement of the delay is based on watching the database in the Firebase console.
Is this a known issue, or am I doing something wrong? Please share your views.
EDIT:
The write is happening via a Firebase Cloud Function.
This is the call to the Cloud Function:
this.http.post(url + "/favouritesAndNotes", obj, this.httpOptions)
  .subscribe((data) => {
    console.log(data);
  }, (error) => {
    console.log(error);
  });
This is the actual function:
app.post('/favouritesAndNotes', (request, response) => {
  var db = admin.database().ref("users/" + request.body.uid);
  var favourites = request.body.favourites;
  var notes = request.body.notes;
  var writes = [];
  if (favourites !== undefined) {
    writes.push(db.child("favourites/").set(favourites));
  }
  if (notes !== undefined) {
    writes.push(db.child("notes/").set(notes));
  }
  // Wait for the writes to finish before ending the function; otherwise
  // Cloud Functions may terminate them mid-flight
  Promise.all(writes)
    .then(() => {
      console.log("Write successful");
      response.status(200).end();
    })
    .catch((error) => {
      console.error(error);
      response.status(500).end();
    });
});
The first time you interact with the Firebase Database in a client instance, the client/SDK has to do quite a few things:
If you're using authentication, it needs to check if the token that it has is still valid, and if not refresh it.
It needs to find the server that the database is currently hosted on.
It needs to establish a web socket connection.
Each of these may take multiple round trips, so even if you're a few hundred ms from the servers, it adds up.
Subsequent operations from the same client don't have to perform these steps, so are going to be much faster.
If you want to see what's actually happening, I recommend checking the Network tab of your browser. For the realtime database specifically, I recommend checking the WS/Web Socket panel of the Network tab, where you can see the actual data frames.
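If that initial delay hurts the user experience, one common mitigation (a sketch of a workaround, not something from the answer above) is to touch the database as early as possible in the app lifecycle so the handshake happens before the first user action. The .info/connected path is a built-in location that reports connection state:
// Warm up the connection at startup so the first real write doesn't
// pay the token-refresh / server-discovery / WebSocket setup cost.
firebase.database().ref('.info/connected').on('value', (snap) => {
  console.log(snap.val() ? 'Connected to Firebase' : 'Not yet connected');
});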
I am currently working with Firebase Cloud Functions, doing an HTTP request through Functions. The HTTP request is made by a 3G module, and I always need to read a value change in the database.
In this system, as soon as there is a change in the DB, I should notify the 3G module, so currently I am doing it with an HTTP request.
exports.moduleRequest = functions.https.onRequest((req, res) => {
  var change = admin.database().ref('/userInfo');
  // Once there is a change in any userInfo child, respond with it
  change.once('child_changed', (snapshot) => {
    res.send(snapshot.val());
  });
});
This works perfectly fine; the problem is that I leave the HTTP request open until there is a change in the DB, so it consumes the quota provided by Firebase in about 60 minutes:
Error: quota exceeded (CPU allocation in function invocations : per day); check and increase your quota at https://console.cloud.google.com/iam-admin/quotas?project=pass-e098f&service=cloudfunctions.googleapis.com&usage=ALL. Function killed.
Do you know of another approach to get this system working?
I found that the easiest and best way to solve my issue is to use the REST API, as it lets me stream through an HTTP GET request. My SIMCOM SIM5320 3G module acts as the client, and the server then sends an event with the database update at the requested path.
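For illustration, a minimal Node.js sketch of that streaming call (the hostname is a placeholder and authentication is omitted); the 3G module would issue the equivalent raw GET with an Accept: text/event-stream header:
const https = require('https');

https.get({
  hostname: 'your-project.firebaseio.com', // placeholder project
  path: '/userInfo.json',
  headers: { Accept: 'text/event-stream' } // ask for Server-Sent Events
}, (res) => {
  res.on('data', (chunk) => {
    // Each frame is an SSE event describing a change at /userInfo
    console.log(chunk.toString());
  });
});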
A very simple scenario: in a local Meteor (v 1.2.1) development environment (WebStorm), with autopublish and insecure enabled, I have a single MongoDB (v 3.0.4) collection, 'Letters'. I wish to respond immediately to any documents being added, removed, or modified in this collection.
For this purpose, I have the following autorun function:
Template.diagram.rendered = function () {
  Tracker.autorun(function () {
    Letters.find({}).observe({
      added: function (document) {
        console.log('a new document has been added');
      },
      changed: function (newDocument) {
        console.log('a document has been changed');
      },
      removed: function (document) {
        console.log('a document has been removed');
      }
    });
  });
};
When a new document is added from within the same application, I can see the console messages right away (Meteor latency compensation). However, when I connect to the same MongoDB database using an external tool (Robomongo) and add, change, or remove a document in the 'Letters' collection, it takes about 6-10 seconds before the change is detected and the corresponding console message appears in the browser.
Why does it take this long, instead of being almost instantaneous?
Once I posted this question on the Meteor forums, I was pointed to a Meteor blog post from 2014, which describes the oplog tailing feature and the fact that it is only on by default in the dev instance. This made me realize that, by using the MONGO_URL env variable with my dev application, I was forcing my Meteor app to work with the MongoDB instance that had been running on my Mac all along, independently of my Meteor development, and was therefore treated as "production" by my app. Once I switched the app to an ad hoc Mongo connection / db, oplog tailing took effect, and I started to see immediate event propagation to the browser.
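(For anyone who needs to keep an external MongoDB: per the Meteor docs, you can keep MONGO_URL and additionally point the MONGO_OPLOG_URL env variable at the local database of a replica-set member, e.g. mongodb://localhost:27017/local, to re-enable oplog tailing; the exact URL depends on your setup.)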
Thanks, @dburles from the Meteor forums!
We have a problem with our Meteor server. When we publish 300 or so items with Meteor.publish/Meteor.subscribe, the server's memory usage grows and it eventually becomes unresponsive.
We thought of:
1) monitoring the number of reactive subscriptions / the memory taken by an active subscription
2) making something like a "one-time publish" - ignoring changes in the server-side collection
Any thoughts on how either of the above can be accomplished?
Or any other tips to debug / improve Meteor app performance?
Thanks
zorlak's answer is good.
Some other things:
You can do a one-time publish by writing your own custom publisher via the low-level this.added API, based on the code in _publishCursor. You'd do something like:
Meteor.publish("oneTimeQuery", function () {
MyCollection.find().forEach(function (doc) {
sub.added("collectionName", doc._id, doc);
});
sub.ready();
});
This runs the query once, sends its results down, and then never updates them again.
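On the client you would consume it like any other publication, e.g. Meteor.subscribe("oneTimeQuery"); the documents arrive once and then stay static until the subscription is stopped.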
That said, we hope that Meteor's performance will be such that this is unnecessary!
I'd also like to add an easy way to get stats (like number of observed cursors) from an app to Meteor (exposed as an authenticated subscription) but haven't had the time yet.
As of Meteor 0.5.1, one thing you can do is to remove dependencies on the userId from the publish function. If a publish function does not depend on which user is subscribing, then Meteor will cache the db query so it doesn't get slower as more users subscribe.
See this Meteor blog post: http://meteor.com/blog/2012/11/20/meteor-051-database-scaling
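For illustration, a publish function in that cacheable style; the Posts collection and its field list are hypothetical names:
// No reference to this.userId, so (per the 0.5.1 release notes) Meteor
// can reuse the underlying query across all subscribers.
Meteor.publish("allPosts", function () {
  return Posts.find({}, { fields: { title: 1, body: 1 } });
});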