Google Analytics API | Handling Quota limits of 100 req per 100 seconds - google-analytics

So far I have more than 200 Google Analytics accounts to pull information from. I've tested my solution with a restricted number of accounts (only 10).
Once I put all the account IDs into my logic, it gives me quota-limit errors: 100 requests per 100 seconds.
foreach (var viewID in _viewIds)
{
    foreach (var dateRange in dateRanges)
    {
        tTaskList.Add(Task.Run(async () =>
        {
            try
            {
                var tReports = await gc.PostAsyncTask(url, reportRequest);
                foreach (var report in tReports.reports)
                {
                    if (report != null)
                    {
                        //All report logic here
                        //(...)
                        bool hasNextPage = false;
                        do
                        {
                            //All pagination logic here
                            //(...)
                        } while (hasNextPage);
                    }
                }
            }
            catch (Exception ex)
            {
                //Write in Log
                var ex_message = Common.Utils.GetExceptionMessage(ex);
                Logger.WriteConsoleLog(ex_message.MessageString, (int)Logger.Logs.ERROR, APIName, RequestID);
            }
        })); //end add task
    }
}
How can I control the number of requests to avoid getting these errors?

quota limit 100 requests per 100 seconds.
It's basically flood protection: you are going too fast. You can make at most 100 requests within 100 seconds. The simplest solution would probably be to implement exponential backoff.
I went a step further in my application, though. I created a request queue (max 100 items in it) and log the time of each request down to the millisecond. Before I send a request, I check the time difference between the first and the last item in the queue; if the span is less than the window (say, under 95 seconds), I pause for the difference. This has not completely stopped these errors, but it has reduced them drastically.
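A minimal sketch of that sliding-window check, assuming the queue is an array of request timestamps in milliseconds (the function and parameter names here are hypothetical):

```javascript
// Sliding-window throttle for "100 requests per 100 seconds".
// `timestamps` holds the send times (ms) of up to the last `maxRequests` requests.
function computeDelay(timestamps, nowMs, windowMs = 100000, maxRequests = 100) {
  if (timestamps.length < maxRequests) return 0; // window not full yet
  const elapsed = nowMs - timestamps[0];         // age of the oldest request
  return elapsed < windowMs ? windowMs - elapsed : 0; // wait out the remainder
}

// Record a request and trim the queue to the window size.
function recordRequest(timestamps, nowMs, maxRequests = 100) {
  timestamps.push(nowMs);
  if (timestamps.length > maxRequests) timestamps.shift();
}
```

The same arithmetic ports directly to the C# loop above: pause for `computeDelay(...)` milliseconds (e.g. with `Task.Delay`) before each request, then record the send time.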

Related

Server performance question about streaming from Cosmos DB

I read an article about IAsyncEnumerable, more specifically with a Cosmos DB data source:
public async IAsyncEnumerable<T> Get<T>(string containerName, string sqlQuery)
{
    var container = GetContainer(containerName);
    using FeedIterator<T> iterator = container.GetItemQueryIterator<T>(sqlQuery);
    while (iterator.HasMoreResults)
    {
        foreach (var item in await iterator.ReadNextAsync())
        {
            yield return item;
        }
    }
}
I am wondering how Cosmos DB handles this compared to paging, let's say 100 documents at a time. We have had some "429 - Request rate too large" errors in the past, and I don't wish to create new ones.
So, how will this affect server load/performance?
I don't see a big difference from the server's perspective between the client streaming (and doing some quick checks) and the old way: getting all documents with while (iterator.HasMoreResults) and collecting the items in a list.
The SDK will retrieve batches of documents whose size can be adjusted through the QueryRequestOptions by changing the MaxItemCount (which defaults to 100 if not set). It has no option, though, to throttle RU usage apart from running into the 429 error and using the retry mechanism the SDK offers to try again a while later. Depending on how generously you configure the retry mechanism, it will retry often and long enough to get a proper response.
If you have a situation where you want to limit RU usage, for example when multiple processes use your Cosmos account and you don't want them to run into 429 errors, you have to write the logic yourself.
An example of how something like that could look:
var qry = container
    .GetItemLinqQueryable<Item>(requestOptions: new() { MaxItemCount = 2000 })
    .ToFeedIterator();

var results = new List<Item>();
var stopwatch = new Stopwatch();
var targetRuMsRate = 200d / 1000; //target 200 RU/s
var previousElapsed = 0L;
var delay = 0;
stopwatch.Start();
var totalCharge = 0d;

while (qry.HasMoreResults)
{
    if (delay > 0)
    {
        await Task.Delay(delay);
    }
    previousElapsed = stopwatch.ElapsedMilliseconds;
    var response = await qry.ReadNextAsync();
    var charge = response.RequestCharge;
    var elapsed = stopwatch.ElapsedMilliseconds;
    var delta = elapsed - previousElapsed;
    delay = (int)((charge - targetRuMsRate * delta) / targetRuMsRate);
    foreach (var item in response)
    {
        results.Add(item);
    }
}
Edit:
Internally the SDK calls the underlying Cosmos REST API. Once your code reaches iterator.ReadNextAsync(), it calls the query-documents method in the background. If you dig into the source code or intercept the message sent to HttpClient, you can observe that the resulting message lacks the x-ms-max-item-count header that determines the number of documents it will try to retrieve (unless you have specified a MaxItemCount yourself). According to the Microsoft docs, it defaults to 100 if not set:
Query requests support pagination through the x-ms-max-item-count and x-ms-continuation request headers. The x-ms-max-item-count header specifies the maximum number of values that can be returned by the query execution. This can be between 1 and 1000, and is configured with a default of 100.
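The continuation pattern those headers describe can be sketched generically. The `fetchPage` function below is a hypothetical stand-in for the REST call; a real implementation would set x-ms-max-item-count on the request and pass the returned x-ms-continuation token back on the next call:

```javascript
// Drain a paginated query: fetchPage(continuation) resolves to
// { items, continuation }, where continuation is null on the last page.
async function readAll(fetchPage) {
  const results = [];
  let continuation = null;
  do {
    const page = await fetchPage(continuation);
    results.push(...page.items);
    continuation = page.continuation;
  } while (continuation !== null);
  return results;
}
```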

Xamarin Offline Sync with AzureMobileServices: Initial offline load incredibly slow

I'm successfully using Azure Mobile Services and Xamarin Forms to perform CRUD operations on a SQL DB hosted in Azure. The offline sync portion stores the data in a SQLite DB on the phone. There have been a few bumps along the way to get it working as smoothly as we have it now, but this remains the last hurdle.
Problem
When the device has no connection (tested using Airplane mode on a variety of physical and emulated devices) - the first time it goes to access any of the offline data, it takes a very long time to return anything. This is the case if the data exists in the SQLite DB or not.
There is no exception thrown, or anything that I can see printed to the logs that indicates what the delay might be.
To give an idea: a PullAsync() on 20 rows might take 5 seconds while online, and that data is stored in the SQLite DB. After putting the device into offline mode, the same operation may take up to 60 seconds. These numbers are somewhat arbitrary, but the delay is noticeably far too long.
To add to this, this long load only occurs the very first time any Offline Sync method is called. After that, every method is near instant, as I would expect it to be - but why not the first time?
Expected Result
I would expect that because the data is stored on the device already, and no internet connection can be detected, it should return the data almost instantly.
Code
Sync Class
The GetPolicies() method is where the delay would occur.
This is a sample of one of the components. Every other component is the same format, but different data.
IMobileServiceSyncTable<policy_procedure> policyTable = SyncController.policyTable;

public async Task<List<policy_procedure>> GetPolicies(string companyId)
{
    //SemaphoreSlim
    await SyncController.dbOperation.WaitAsync();
    try
    {
        await SyncController.Initialize();
        await policyTable.PullAsync("policy_procedure", policyTable.Where(p => p.fk_company_id == companyId).Where(p => p.signature != null || p.signature != ""));
        return await policyTable.ToListAsync();
    }
    catch (Exception ex)
    {
        //For some reason, when this method is called and the device is offline, it will fall into this catch block.
        //I assume this is standard for offline sync, as it's trying to do a pull with no connection, causing it to fail.
        //Through using breakpoints, the delay occurs even before it reaches this catch statement.
        Console.WriteLine(ex);
        return await policyTable.ToListAsync();
    }
    finally
    {
        SyncController.dbOperation.Release();
    }
}
Sync Controller
public static SemaphoreSlim dbOperation = new SemaphoreSlim(1, 1);
public static MobileServiceClient client;
public static MobileServiceSQLiteStore store;

public static async Task Initialize()
{
    try
    {
        //This line is not standard for Offline Sync.
        //The plugin returns true or false for the device's current connectivity.
        //It's my attempt to see if there is a connection, to eliminate the load time.
        //This does immediately take it back to the try statement in GetPolicies
        if (!CrossConnectivity.Current.IsConnected)
            return;

        if (client?.SyncContext?.IsInitialized ?? false)
            return;

        client = new MobileServiceClient(AppSettings.azureUrl);
        var path = "local.db"; //Normally uses company ID
        path = Path.Combine(MobileServiceClient.DefaultDatabasePath, path);
        store = new MobileServiceSQLiteStore(path);

        /************************/
        #region Table Definitions in local SQLite DB
        //Define all the tables in the sqlite db
        //(...)
        store.DefineTable<policy_procedure>();
        //(...)
        #endregion

        await client.SyncContext.InitializeAsync(store);

        /************/
        #region Offline Sync Tables
        //(...)
        policyTable = client.GetSyncTable<policy_procedure>();
        //(...)
        #endregion
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
What I've Tried
Well, I'm not too sure what's even causing this, so most of my attempts have been around forcing an exception before this wait time occurs, so that it can fall out of the GetPolicies try-catch, as the wait appears to be in PullAsync.
My most recent attempt is commented in the code above (SyncController), where I use James Montemagno's Connectivity Plugin to detect the phone's network connectivity. (I've tested this separately, and it works correctly without delay.)
The short story is that you don't want to call PullAsync in your GetPolicies method if your device is offline. For example, you could do
try
{
    await SyncController.Initialize();
    if (CrossConnectivity.Current.IsConnected)
    {
        await policyTable.PullAsync("policy_procedure", policyTable.Where(p => p.fk_company_id == companyId).Where(p => p.signature != null || p.signature != ""));
    }
    return await policyTable.ToListAsync();
}
but you will also want to handle the case where this is the first time the app runs and so you don't have any records yet.

signalR hub is taking too much time to load

Retrieval of /signalr/hubs gets very slow after 5-10 minutes, and I have to restart the app pool again and again. Is there any way to cache this?
What I've done:
Checked all the memory and CPU allocations for the app pool but couldn't find anything.
Searched on Google but didn't find anything relevant.
You can try putting these jQuery functions in your view page.
var tryingToReconnect = false;

$.connection.hub.reconnecting(function () {
    tryingToReconnect = true;
});

$.connection.hub.reconnected(function () {
    tryingToReconnect = false;
});

$.connection.hub.disconnected(function () {
    if (tryingToReconnect) {
        notifyUserOfDisconnect(); // Your function to notify user.
    }
});
Also check whether your network connection is slow:
$.connection.hub.connectionSlow(function () {
    notifyUserOfConnectionProblem(); // Your function to notify user.
});
That should give you a better idea of whether the issue is caused by SignalR or not.
Hope this helps.

retrieving data from firebase database (localhost) in cloud functions takes too long (sometimes)

I have 50,000 users in the database. I need to query the data efficiently.
However, it sometimes takes too long to load the data, mostly when I haven't called the function for a longer time (2 minutes).
Why is this happening?
How can I improve the speed of the query?
//index.js
exports.userNearBy = functions.https.onRequest((request, response) => {
    var time1 = Date.now();
    var i = 0;
    admin.database().ref('users/').on('value', function (snapshot) {
        snapshot.forEach(function (child) {
            i++;
            if (i == 49999) {
                console.log(Date.now() - time1);
                response.status(200).send("ok");
            }
        });
    });
});
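Since streaming all 50,000 users per request is the expensive part, a "users nearby" query benefits from narrowing the candidate set rather than scanning everything. A self-contained sketch of the distance filter, assuming a hypothetical {lat, lng} data shape and haversine distance:

```javascript
// Haversine distance in km between two {lat, lng} points.
function distanceKm(a, b) {
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h)); // 6371 km = mean Earth radius
}

// Filter users near a point; `users` is an array of {id, lat, lng}.
function usersNearBy(users, center, radiusKm) {
  return users.filter(u => distanceKm(u, center) <= radiusKm);
}
```

In practice, a geohash-based index (for example via the geofire library) lets the database return only users in the target area instead of the whole tree; note also that the intermittent extra delay after idle periods is typical Cloud Functions cold-start latency, and using once('value') instead of on('value') avoids leaving a listener attached after the response is sent.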

Webaudio api: change the sample rate

Is it possible to change the sampling rate of a recorded wave file without using third-party software or websites, just in JS?
If recorder.js sets the frequency to 44100:
worker.postMessage({
    command: 'init',
    config: {
        sampleRate: 44100
    }
});
the file is written at that same frequency; if you reduce it to 22050, the recorded file becomes twice as long and plays back slowly, while if you increase the playback speed the recording sounds fine. So the question is: is it possible to change the sample rate of already-recorded files, and how?
The only way I have found so far is a small resampling library, xaudio.js, part of the speex.js library. It works pretty nicely; I use it to convert audio from its native format to 8 kHz mono.
For anyone interested: because typed arrays are transferable, you can send them to a web worker, downsample there, and then send the result back, to a server, or wherever.
//get audio from user and send it to a web worker
function recordUser(argument) {
    var audioCtx = new AudioContext();
    var worker = new Worker('downsampler.js');

    // Create a ScriptProcessorNode with a bufferSize of 512, a single input and no output channel
    var scriptNode = audioCtx.createScriptProcessor(512, 1, 0);
    console.log(scriptNode.bufferSize);

    // Give the node a function to process audio events
    scriptNode.onaudioprocess = function (audioProcessingEvent) {
        var inputBuffer = audioProcessingEvent.inputBuffer;
        console.log(inputBuffer.getChannelData(0));
        worker.postMessage(inputBuffer.getChannelData(0));
    };

    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function (mediaStream) {
            var mediaStreamSource = audioCtx.createMediaStreamSource(mediaStream);
            mediaStreamSource.connect(scriptNode);
        })
        .catch(function (err) { console.log(err.name + ": " + err.message); });
}
The web worker looks something like this. If you want to send the data to a server, use a WebSocket; otherwise, use postMessage to transfer it back to the client. You'll need to add an event listener on the client side as well, so search "mdn WebWorker" to read up on that.
//example worker that sends the data to both a web socket and back to the user
var ws = new WebSocket('ws://localhost:4321');
ws.binaryType = 'arraybuffer';

self.addEventListener('message', function (e) {
    var data = e.data;
    var sendMe = new Float32Array(data.length / 16);
    for (var i = 0; i * 16 < data.length; i++) {
        sendMe[i] = data[i * 16]; // keep every 16th sample
    }
    //send to server
    ws.send(sendMe);
    //or send back to user
    self.postMessage(sendMe);
}, false);
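Note that keeping every 16th sample, as above, is plain decimation: any frequency content above the new Nyquist limit will alias into the result. A linear-interpolation resampler is still not a proper low-pass filter, but it handles non-integer rate ratios and sounds noticeably better. A minimal self-contained sketch (function name is illustrative):

```javascript
// Resample a Float32Array from `fromRate` to `toRate` by linear interpolation.
function resampleLinear(input, fromRate, toRate) {
  const ratio = fromRate / toRate;
  const outLength = Math.floor(input.length / ratio);
  const output = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;               // fractional source position
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1);
    const frac = pos - i0;
    output[i] = input[i0] * (1 - frac) + input[i1] * frac; // blend neighbours
  }
  return output;
}
```

For 44100 to 22050 the ratio is exactly 2, so each output sample sits halfway between two input samples; for odd ratios such as 44100 to 8000 the interpolation is what keeps the pitch correct.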
