What is the minimum cache length I can specify without request rejection in Remote Config? - firebase

I have been trying to figure out whether I can utilize Remote Config to set maintenance mode in a React Native project without caching issues.
To bypass caching fully, you can pass a value of 0. Be warned Firebase may start to reject your requests if values are requested too frequently.
According to the Remote Config documentation quoted above, it seems I can specify a cache length of 0, but doing so can cause request rejections.
So I wonder: is it okay to set a value other than 0, such as 30000, for the minimumFetchIntervalMillis property to avoid rejections, or is there a specific minimum cache length I should stay above?
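For context, here is roughly what I have in mind (a sketch using the @react-native-firebase/remote-config module; the maintenance_mode key and the 30000 ms value are just examples):

```typescript
// Sketch: read a maintenance_mode flag from Remote Config in React Native.
// Assumes @react-native-firebase/remote-config; the key name and interval
// are illustrative only.
import remoteConfig from '@react-native-firebase/remote-config';

export async function isMaintenanceMode(): Promise<boolean> {
  // Local default used until a fetch succeeds.
  await remoteConfig().setDefaults({ maintenance_mode: false });

  // Ask the backend for fresh values at most once every 30 seconds;
  // calls made inside that window are served from the local cache.
  await remoteConfig().setConfigSettings({
    minimumFetchIntervalMillis: 30000,
  });

  await remoteConfig().fetchAndActivate();
  return remoteConfig().getValue('maintenance_mode').asBoolean();
}
```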
Thank you in advance.
Soo

It sounds like you're looking for information on Firebase's rate limiting structure.
You can find their explanation here (at time of writing): https://firebase.google.com/docs/functions/quotas
You'll fall within one of several pricing categories - you can use this to predict your costs based on your usage. Firebase is built to scale, so they should be able to handle a good amount of traffic.

Related

How to handle Caching before it has been set and multiple users access the website at the same time

I have set up caching for my website; the cache expires after an hour.
My problem is that when the cache has not been populated yet and multiple users access the website at the same time, they all make the same request simultaneously, which keeps CPU usage at 100% for a long period. I would like to avoid that.
I am using System.Runtime.Caching.MemoryCache in an ASP.NET MVC application.
I have thought of a solution but I am not sure how best to implement it. My idea is:
One of the many users will come in first and start the request, and I set a flag to say data is being fetched. Any users who come in after that will also see that no cache has been set, but before starting their own request they check the flag to see whether a request has already been triggered. If it has, the application should wait until a response has come back (is this possible?) and then use the response from the cache.
This way only one request is sent, the response from the service comes back more quickly, and CPU usage stays low.
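To sketch the idea (written here in TypeScript rather than my actual C#/MemoryCache code; fetchData stands in for the real service call and the Maps stand in for the cache store):

```typescript
// "Single flight" sketch: the first caller for a key starts the fetch and
// registers the in-flight promise; later callers await that same promise
// instead of starting their own request.
const cache = new Map<string, { value: unknown; expiresAt: number }>();
const inFlight = new Map<string, Promise<unknown>>();

async function getOrFetch<T>(
  key: string,
  fetchData: () => Promise<T>,   // placeholder for the real service call
  ttlMs: number,
): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T;                    // fresh cache hit
  }

  const pending = inFlight.get(key);
  if (pending) {
    return pending as Promise<T>;             // a fetch is already running: wait for it
  }

  const promise = fetchData()
    .then((value) => {
      cache.set(key, { value, expiresAt: Date.now() + ttlMs });
      return value;
    })
    .finally(() => inFlight.delete(key));     // clear the "fetching" flag either way

  inFlight.set(key, promise);
  return promise;
}
```

In the C# version the same role would be played by locking around the cache check, so that only the first thread performs the fetch while the others wait for it.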
Please do suggest an alternative; my idea could be wrong.
Can someone please advise?
Thanks

API rate limits on calling getSyncChunk()

Given that Evernote doesn't publish its exact API rate limits (at least I can't find them), I'd like to ask for some guidance on its usage.
I'm creating an application that will sync the user's notes and store them locally. I'm using getFilteredSyncChunk to do this.
I'd like to know how often I can make this API call without hitting the limits. I understand that the limits are on a per-user basis, so would it be acceptable to call this every 5 minutes to get the latest notes?
TIA
The rate limit is on a per API key basis. You'll be okay calling getFilteredSyncChunk every five minutes, although it's a little more efficient to call getSyncState instead.
In case you haven't seen it yet, check out this guide for info on sync (accessible from this page).
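To illustrate that suggestion, here is a rough polling sketch (TypeScript; the noteStore and filter objects are assumed to come from the Evernote SDK, and the getFilteredSyncChunk parameter order follows the Thrift API reference, so check it against the SDK you use):

```typescript
// Poll getSyncState (cheap) on a timer and only pull chunks with
// getFilteredSyncChunk when the server-side updateCount has moved.
let lastUpdateCount = 0;

async function syncIfChanged(noteStore: any, filter: any): Promise<void> {
  const state = await noteStore.getSyncState();
  if (state.updateCount === lastUpdateCount) {
    return; // nothing has changed since the last poll
  }

  let afterUSN = lastUpdateCount;
  while (afterUSN < state.updateCount) {
    const chunk = await noteStore.getFilteredSyncChunk(afterUSN, 250, filter);
    // ...store chunk.notes / chunk.notebooks locally here...
    if (chunk.chunkHighUSN == null) break;  // no more data in this chunk
    afterUSN = chunk.chunkHighUSN;
  }
  lastUpdateCount = state.updateCount;
}

// For example: setInterval(() => syncIfChanged(noteStore, filter), 5 * 60 * 1000);
```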

What is wrong with alfresco.cache.immutableEntityTransactionalCache?

I have this in my log:
2016-01-07 12:22:38,720 WARN [alfresco.cache.immutableEntityTransactionalCache] [http-apr-8080-exec-5] Transactional update cache 'org.alfresco.cache.immutableEntityTransactionalCache' is full (10000).
and I do not want to just increase this parameter without knowing what is really going on and without better insight into Alfresco cache best practices!
FYI:
The warning appears when I list the elements from the document library root folder in a site. Note that the site does have ~300 docs/folders at that level, several of which are involved in current workflows, and I am fetching all of them in a single call (client-side paging).
I am using an Alfresco CE 4.2.c instance with around 8k nodes
I've seen this in my logs whenever you do a "big" transaction, by which I mean changing 100+ files in a batch.
Quoting Axel Faust:
The performance degradation is the reason that log message is a warning. When the transactional cache size is reached, the cache handling can no longer handle the transaction commit properly, and before any stale / incorrect data is put into the shared cache, it will actually empty out the entire shared cache. The next transaction(s) will suffer bad performance due to cache misses...
Cache influence on Xmx depends on what the cache does, unfortunately. The property value cache should have little impact since it stores granular values, but the node property cache would have quite a different impact as it stores the entire property map. I only have hard experience data from node cache changes, and for that we calculated an additional need of 3 GiB for an increase to four times the standard cache size.
It is very common to get these warnings.
I do not think that it is a good idea to change the default settings.
Instead, you could probably try to change your code, if possible.
As described in this link to the Alfresco forum by one of the Alfresco engineers, the values suggested by Alfresco are "sane". They are designed to work well in standard cases.
You can decide to change them, but you have to be careful, because you can end up with worse performance than if you had done nothing.
I would suggest investigating why your use of this webscript is causing the cache overflow and checking whether you can do something about it. The fact that you are retrieving 300 documents/folders at the same time is likely the cause.
The following article describes how to troubleshoot and solve cache issues:
Alfresco cache tuning
As described in that article, I would suggest increasing the log level for ehcache:
org.alfresco.repo.cache.EhCacheTracerJob=DEBUG
Or selectively add the name of the cache that you want to monitor.
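For reference, a typical way to enable that in a log4j 1.x override file (the exact file and location depend on your installation, e.g. a custom-log4j.properties in the extension directory, so treat this as a sketch):

```properties
# Enable cache statistics logging from the EhCacheTracerJob
log4j.logger.org.alfresco.repo.cache.EhCacheTracerJob=DEBUG
```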

appfabric caching failure exceptions=getandlock requests for session state

I'm using the session provider in an ASP.NET app with a 3-host AppFabric cluster.
The cluster is version 3 and is running on Windows Server 2008.
My sessions cluster has secondaries set to 1 and min secondaries set to 0.
When I look at the cache statistics I notice a very large (disproportionate) number in the miss count category; in fact it almost equals the request count. Because of this, I decided to look at the performance counters to figure out why the session provider does not seem to be able to hold on to the objects correctly, or why it keeps missing.
What I found was that the getandlocks/sec counter is identical to the failure exceptions/sec counter. It is also constantly busy, which is not normal considering there is only so much traffic being generated by our staff. The object count is not large, but the rejection rate is clearly much higher than the number of objects that should be coming out of the cache. I'm not writing or modifying that much information in the session; for the most part it doesn't change, yet clearly I'm getting significantly more requests than my users could be creating.
Any help is welcome.
PS.
Ideally I'd love to know what these failure exceptions say, but there seems to be no way to capture them.

Outputcache - how to determine optimal value for duration?

I read somewhere that for a high-traffic site (I guess that is a murky term as well), 30-60 seconds is a good value. Obviously I could do a load test and vary the values, but I couldn't find any kind of documentation on this. Most samples use a minute or a couple of minutes; there's no recommended range. Is there something on MSDN or anywhere else that talks about this?
This all depends on whether or not the content changes frequently. For slowly mutating or non-mutating content, a longer value works well. However, you may need to shorten the value for constantly changing data or risk serving stale output.
It all depends on how often a user requests your resource, and how big the resource is.
First, it is important to understand that when you cache something, that resource will remain the same until the cache duration runs out. A short cache duration taxes the web server more than a longer one, but it serves more up-to-date data should the requested resource change. For example, with a 60-second duration and 100 requests per second, a page is rendered once per minute instead of 6,000 times.
Obviously you want to cache database queries as much as possible, prioritizing those that are called often. But all caching takes memory on the server, and as resources run low the cache will be evicted. Take this into consideration when caching large things for long durations.
If you want data on how often users request a resource you can use Google Analytics, which is extremely easy to set up.
For very exhaustive analytics you can use Piwik, though it requires a local server.
For frequently changing resources, don't cache at all, unless generating them is really resource-heavy and it isn't vital for them to be updated in real time.
To give you an exact number or recommendation would be to do you a disservice; there are too many variables involved.
