I am a beginner and I am developing a salon scheduler with Firebase.
The scheduler stores customer information such as name, phone number, and date. Within the scheduling node there is a node for the service status, which should make it impossible for two clients to book the same time. For this, I need to read the status node and check whether its value equals "ocupado" (occupied) or "desocupado" (unoccupied). If the value equals "ocupado", an error message must be returned. If the value equals "desocupado", the client can book the appointment at that time.
Just access val(), for example:
var ref = firebase.database().ref("db/scheduler");
ref.once('value').then(function (snapshot) {
    var check = snapshot.child("firstTime").val();
    var returnMsg = '';
    if (check == "ocupado") {
        returnMsg = "error";
    } else {
        returnMsg = 'open';
    }
    console.log(returnMsg);
});
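Note that a read-then-write check like this can still race if two clients run it at the same moment. If you also want the booking itself to be safe, a transaction (the same technique used in the answers further down) is one option. A minimal sketch, assuming the same db/scheduler/firstTime node and the "ocupado"/"desocupado" values from the question:
var slotRef = firebase.database().ref("db/scheduler/firstTime");
slotRef.transaction(function (current) {
    if (current === "ocupado") {
        return; // abort the transaction: the slot is already taken
    }
    return "ocupado"; // atomically claim the slot
}, function (error, committed) {
    // committed === false means another client claimed the slot first
    console.log(committed ? "open" : "error");
});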
I'm implementing a program to migrate a large amount of data to ADX, based on the Ingest from Storage feature of ADX, and I need to check the status of each ingestion request when it finishes, but I'm facing an issue.
Based on the MS documentation here:
If I set persistDetails = true, for example with the command below, it should save the ingestion status, but currently this setting does not seem to work (with or without it):
.ingest async into table MigrateTable
(
h'correct blob url link'
)
with (
jsonMappingReference = 'table_mapping',
format = 'json',
persistDetails = true
)
The command above returns an OperationId, and when I use it to check the ingestion status after the ingest task finishes, I always get this error message:
Error An admin command cannot be executed due to an invalid state: State='Operation 'DataIngestPull' does not persist its operation results' clientRequestId: KustoWebV2;
Can someone clarify the root cause of this? To me it seems like a bug in ADX.
1. Ingesting data directly against the Data Engine, by running .ingest commands, is usually not recommended, compared to using Queued Ingestion (motivation included in the link). Using Kusto's ingestion client library allows you to track the ingestion status.
2. Some tools/services already do that for you, and you can consider using them directly, e.g. LightIngest or Azure Data Factory.
3. If you don't follow option 1, you can still look up the state/status of your command using the operation ID you get when using the async keyword, by running .show operations.
4. You can also use the client request ID to filter the result set of .show commands to view the state/status of your command.
5. If you're interested in looking specifically at failures, .show ingestion failures is also available for you.
The persistDetails option you specified in your .ingest command actually has no effect - as mentioned in the docs:
Not all control commands persist their results, and those that do usually do so by default on asynchronous executions only (using the async keyword). Please search the documentation for the specific command and check if it does (see, for example data export).
============ Update: sample code following Yoni's suggestion ============
It turns out another member of my team had messed up the access rights on ADX; after fixing that, everything works fine.
I just have one concern, related to PartiallySucceeded, that needs clarification from @yoni or someone with better knowledge of it.
try
{
    var ingestProps = new KustoQueuedIngestionProperties(model.DatabaseName, model.IngestTableName)
    {
        ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
        ReportMethod = IngestionReportMethod.Table,
        FlushImmediately = true,
        JSONMappingReference = model.IngestMappingName,
        AdditionalProperties = new Dictionary<string, string>
        {
            { "jsonMappingReference", $"{model.IngestMappingName}" },
            { "format", "json" }
        }
    };
    var sourceId = Guid.NewGuid();
    var clientResult = await IngestClient.IngestFromStorageAsync(model.FileBlobUrl, ingestProps, new StorageSourceOptions
    {
        DeleteSourceOnSuccess = true,
        SourceId = sourceId
    });
    // Poll the ingestion status until the request leaves the Pending state.
    var ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    while (ingestionStatus.Status == Status.Pending)
    {
        await Task.Delay(WaitingInterval);
        ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    }
    if (ingestionStatus.Status == Status.Succeeded)
    {
        return true;
    }
    LogUtils.TraceError(_logger, $"Error when ingesting blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
catch (Exception e)
{
    return false;
}
I'm using the Firebase Admin SDK in my Cloud Functions and I'm randomly getting an error in some executions when trying to get a user by uid.
let userRecord = await admin.auth().getUser(userId);
The error details are:
{"error":{"code":400,"message":"TOO_MANY_ATTEMPTS_TRY_LATER",
"errors":[{ "message":"TOO_MANY_ATTEMPTS_TRY_LATER",
"domain":"global","reason":"invalid"}]
}
}
My Cloud Function executes on a Realtime Database write and can be triggered for multiple users. In total I make 4 Auth calls in one execution: the first is the one above, the second again gets the user by uid or email, the third is generateEmailVerificationLink, and the last is generatePasswordResetLink.
I have checked the rate limits in the Auth documentation, but there is no mention of a rate limit for these operations. The error TOO_MANY_ATTEMPTS_TRY_LATER is only mentioned for the REST API's sign-up-with-email-and-password endpoint.
If this error is due to a rate limit, what should I change to prevent it, given that these 4 calls are necessary for the operation performed on the database write?
EDIT:
I have identified the actual call that throws the too-many-attempts error. The calls auth().generateEmailVerificationLink() and auth().generatePasswordResetLink() throw this error when called too many times.
I called these two in a loop with 100 iterations and waited for the promises. The first execution finishes without any errors, i.e. 200 requests. But starting a second execution as soon as the first one ends throws the too-many-attempts error. So I think these two calls have a limit. For now I'm trying to reduce these calls and reuse the link information. Other calls like getUserByEmail work fine.
let promises = [];
let auth = admin.auth();
let hrstart = process.hrtime();
for (let i = 0; i < 100; i++) {
    promises.push(auth.getUserByEmail("user email"));
    promises.push(auth.generateEmailVerificationLink("user email", { url: `https://app.firebaseapp.com/path` }));
    promises.push(auth.generatePasswordResetLink("user email", { url: `https://app.firebaseapp.com/path` }));
}
Promise.all(promises)
    .then(value => {
        let hrend = process.hrtime(hrstart);
        console.log(hrend);
        // console.log(value)
    });
The error was specifically in the operation auth.createEmailLink. This function has the following limit: 20 QPS per IP address, where QPS means queries per second. This limit can be increased by submitting your use case to Firebase.
I got this information from Firebase support after submitting my issue.
Link to my github issue: https://github.com/firebase/firebase-admin-node/issues/458
I was way under 20 QPS but was receiving this exception. In fact, it would always throw the TOO_MANY_ATTEMPTS_TRY_LATER exception on the 2nd attempt.
It turned out to be the usage of FirebaseAuth.DefaultInstance instead of instantiating a static instance, like so:
In class definition:
private readonly FirebaseApp _firebase;
In class constructor:
_firebase = FirebaseAdmin.FirebaseApp.Create();
In function:
var auth = FirebaseAuth.GetAuth(_firebase);
var actionCodeSettings = new ActionCodeSettings()
{
    ...
};
var link = await auth.GenerateEmailVerificationLinkAsync(email, actionCodeSettings);
return link;
In addition to the answer in https://stackoverflow.com/a/54782967/5515861, I want to add another solution, in case you ran into this issue while trying to create a custom email verification.
This was inspired by the response in this GitHub issue: https://github.com/firebase/firebase-admin-node/issues/458#issuecomment-933161448.
I am also seeing this issue. I have not run admin.auth().generateEmailVerificationLink in over 24hrs (from anywhere else or any user at all) and called it just now only one time (while deployed in the prod functions environment) and got this 400 TOO_MANY_ATTEMPTS_TRY_LATER error ...
But, the client did also call the Firebase.auth.currentUser.sendEmailVerification() method around same time (obviously different IP).
Could that be the issue?
My solution to this issue is adding a retry, e.g.:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const auth = admin.auth();

exports.sendWelcomeEmail = functions.runWith({failurePolicy: true}).auth.user().onCreate(async (user) => {
    functions.logger.log("Running email...");
    const email = user.email;
    const displayName = user.displayName;
    const link = await auth.generateEmailVerificationLink(email, {
        url: 'https://mpj.io',
    });
    // sendWelcomeEmail here is the author's own mail-sending helper
    await sendWelcomeEmail(email, displayName, link);
});
The .runWith({failurePolicy: true}) is key.
It's giving you an error because your Cloud Functions backend calls generateEmailVerificationLink while, at the same time, Firebase's default behaviour does the same, and both calls count toward the 20 QPS. It's some weird Google rate-limit accounting rule. So my solution is just to add a retry.
The downside is that it calls twice, so if the call is billable, it might be billed twice.
I have one Firebase database instance and I would like to add a counter to a certain node.
Every time a user runs a specific action, I would like to increment the node's value. How do I do that without getting synchronization problems? How can I use Cloud Functions to do that?
Ex.:
database {
  node {
    counter: 0
  }
}
At a certain time, 3 different users read the value of counter and try to increment it. As they read at the exact same time, all of them read "0" and increment it to "1", but the desired value at the end of execution should be "3", since it was read 3 times.
================== update ==================
@renaud pointed me to transactions to keep the saved data synchronized, but I have another scenario where I need the synchronization on the read side as well:
e.g. the user reads the current value, performs a different action according to it, and finishes by incrementing it by one...
In a SQL-like environment I would write a stored procedure for this, because no matter what the user does with the info, I always finish by incrementing by one.
If I understood @renaud's answer right, in that scenario 4 different users reading the database at the same time would all get 0 as the current value; after the transaction updates, the final stored value would be 4, but on the client side each of them would have read just 0.
You have to use a Transaction in this case, see https://firebase.google.com/docs/database/web/read-and-write#save_data_as_transactions and also https://firebase.google.com/docs/reference/js/firebase.database.Reference#transaction
A Transaction will "ensure there are no conflicts with other clients writing to the same location at the same time."
In a Cloud Function you could write your code along the following lines, for example:
....
const counterRef = admin
    .database()
    .ref('/node/counter');
return counterRef
    .transaction(current_value => {
        return (current_value || 0) + 1;
    })
    .then(counterValue => {
        if (counterValue.committed) {
            // For example, update another node in the database
            const updates = {};
            updates['/nbrOfActionsExecuted'] = counterValue.snapshot.val();
            return admin
                .database()
                .ref()
                .update(updates);
        }
    });
or simply the following, if you just want to update the counter (since a transaction returns a Promise, as explained in the second link referred to above):
exports.testTransaction = functions.database.ref('/path').onWrite((change, context) => {
    const counterRef = admin
        .database()
        .ref('/node/counter');
    return counterRef
        .transaction(current_value => {
            return (current_value || 0) + 1;
        });
});
Note that, in this second case, I have used a Realtime Database trigger as an example of trigger.
You can get the child count via
firebase_node.once('value', function(snapshot) { alert('Count: ' + snapshot.numChildren()); });
But I believe this fetches the entire sub-tree of that node from the server. For huge lists, that seems RAM and latency intensive. Is there a way of getting the count (and/or a list of child names) without fetching the whole thing?
The code snippet you gave does indeed load the entire set of data and then counts it client-side, which can be very slow for large amounts of data.
Firebase doesn't currently have a way to count children without loading data, but we do plan to add it.
For now, one solution would be to maintain a counter of the number of children and update it every time you add a new child. You could use a transaction to count items, as in this code tracking upvotes:
var upvotesRef = new Firebase('https://docs-examples.firebaseio.com/android/saving-data/fireblog/posts/-JRHTHaIs-jNPLXOQivY/upvotes');
upvotesRef.transaction(function (current_value) {
    return (current_value || 0) + 1;
});
For more info, see https://www.firebase.com/docs/transactions.html
UPDATE:
Firebase recently released Cloud Functions. With Cloud Functions, you don't need to create your own server. You can simply write JavaScript functions and upload them to Firebase. Firebase is responsible for triggering the functions whenever an event occurs.
If you want to count upvotes for example, you should create a structure similar to this one:
{
  "posts": {
    "-JRHTHaIs-jNPLXOQivY": {
      "upvotes_count": 5,
      "upvotes": {
        "userX": true,
        "userY": true,
        "userZ": true,
        ...
      }
    }
  }
}
And then write a javascript function to increase the upvotes_count when there is a new write to the upvotes node.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.countlikes = functions.database.ref('/posts/{postid}/upvotes').onWrite(event => {
    return event.data.ref.parent.child('upvotes_count').set(event.data.numChildren());
});
You can read the Documentation to know how to Get Started with Cloud Functions.
Also, another example of counting posts is here:
https://github.com/firebase/functions-samples/blob/master/child-count/functions/index.js
Update January 2018
The Firebase docs have changed, so instead of event we now have change and context.
The given example throws an error complaining that event.data is undefined. This pattern seems to work better:
exports.countPrescriptions = functions.database.ref(`/prescriptions`).onWrite((change, context) => {
    const data = change.after.val();
    const count = Object.keys(data).length;
    return change.after.ref.child('_count').set(count);
});
This is a little late in the game as several others have already answered nicely, but I'll share how I might implement it.
This hinges on the fact that the Firebase REST API offers a shallow=true parameter.
Assume you have a post object and each one can have a number of comments:
{
  "posts": {
    "$postKey": {
      "comments": {
        ...
      }
    }
  }
}
You obviously don't want to fetch all of the comments, just the number of comments.
Assuming you have the key for a post, you can send a GET request to
https://yourapp.firebaseio.com/posts/[the post key]/comments.json?shallow=true
This will return an object of key-value pairs, where each key is the key of a comment and its value is true:
{
  "comment1key": true,
  "comment2key": true,
  ...,
  "comment9999key": true
}
The size of this response is much smaller than requesting the equivalent data, and now you can calculate the number of keys in the response to find your value (e.g. commentCount = Object.keys(result).length).
This may not completely solve your problem, as you are still calculating the number of keys returned, and you can't necessarily subscribe to the value as it changes, but it does greatly reduce the size of the returned data without requiring any changes to your schema.
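As a concrete illustration, a minimal sketch using fetch; the "yourapp" host name and the post key are placeholders:
// Count comments without downloading them, via the shallow REST parameter.
var postKey = "somePostKey"; // hypothetical post key
fetch("https://yourapp.firebaseio.com/posts/" + postKey + "/comments.json?shallow=true")
    .then(function (res) { return res.json(); })
    .then(function (result) {
        // result is null when the node has no children at all
        var commentCount = result ? Object.keys(result).length : 0;
        console.log("commentCount:", commentCount);
    });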
Save the count as you go, and use validation to enforce it. I hacked this together for keeping a count of unique votes, a topic which keeps coming up! But this time I have tested my suggestion (notwithstanding cut/paste errors!).
The 'trick' here is to use the node priority as the vote count...
The data is:
vote/$issueBeingVotedOn/user/$uniqueIdOfVoter = thisVotesCount, priority=thisVotesCount
vote/$issueBeingVotedOn/count = 'user/'+$idOfLastVoter, priority=CountofLastVote
,"vote": {
".read" : true
,".write" : true
,"$issue" : {
"user" : {
"$user" : {
".validate" : "!data.exists() &&
newData.val()==data.parent().parent().child('count').getPriority()+1 &&
newData.val()==newData.GetPriority()"
user can only vote once && count must be one higher than current count && data value must be same as priority.
}
}
,"count" : {
".validate" : "data.parent().child(newData.val()).val()==newData.getPriority() &&
newData.getPriority()==data.getPriority()+1 "
}
count (last voter really) - vote must exist and its count equal newcount, && newcount (priority) can only go up by one.
}
}
Test script to add 10 votes by different users (ids are faked for this example; in production you should use auth.uid). Change the loop to count down (i--) to see the validation fail.
<script src='https://cdn.firebase.com/v0/firebase.js'></script>
<script>
window.fb = new Firebase('https:...vote/iss1/');
window.fb.child('count').once('value', function (dss) {
    votes = dss.getPriority();
    for (var i = 1; i <= 10; i++) vote(dss, i + votes);
});

function vote(dss, count) {
    var user = 'user/zz' + count; // replace with auth.uid or whatever
    window.fb.child(user).setWithPriority(count, count);
    window.fb.child('count').setWithPriority(user, count);
}
</script>
The 'risk' here is that a vote is cast but the count not updated (hacking or script failure). This is why the votes have a unique 'priority': the script should really start by ensuring that there is no vote with a priority higher than the current count; if there is, it should complete that transaction before doing its own - get your clients to clean up for you :)
The count needs to be initialised with a priority before you start - Forge doesn't let you do this, so a stub script is needed (before the validation is active!).
Write a Cloud Function to update the node count.
// the function below keeps a count of the given node's children
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
exports.userscount = functions.database.ref('/users/')
    .onWrite(event => {
        console.log('users number : ', event.data.numChildren());
        return event.data.ref.parent.child('count/users').set(event.data.numChildren());
    });
Refer :https://firebase.google.com/docs/functions/database-events
root --|
       |- users (this node contains the full list of users)
       |
       |- count
            |- userscount: (this node is added dynamically by the cloud function with the user count)
I am having a problem with a function in IndexedDB where I need to change the status of some meetings. The search feature determines which meetings are checked by grabbing the ID of each one; right after that, I have a for() loop that walks the array containing the ids, and on each database access I fetch a different record by passing the id for that iteration. The following code is an example:
var val = [];
var checkbox = $('input:checkbox[class^=checkReunioes]:checked');
if (checkbox.length > 0) {
    checkbox.each(function () {
        val.push($(this).val());
    });
}
for (var i = 0; i < val.length; i++) {
    var transaction = db.transaction(["tbl_REUNIOES"], "readwrite").objectStore("tbl_REUNIOES");
    var request = transaction.get(val[i]);
    request.onerror = function (event) {
        alert("BAD");
    };
    request.onsuccess = function (event) {
        var data = request.result;
        data.FLG_STATU_REUNI = 'I';
        var codigo_igreja = localStorage.getItem("igreja");
        var dataJSON = JSON.stringify(data);
        enviarFilaSincronismo("tbl_REUNIOES", "U", dataJSON, " WHERE COD_IDENT_REUNI = '" + val[i] + "' and COD_IDENT_IGREJ = '" + codigo_igreja + "'");
        var requestUpdate = transaction.put(data);
        requestUpdate.onerror = function (event) {
            alert("OK");
        };
        requestUpdate.onsuccess = function (event) {
            $("#listReunioes").html("");
            serchAll(w_key_celula);
        };
    };
}
In my view the problem occurs because IndexedDB is asynchronous: it moves on to the next lookup even before the first one finishes.
But how can I check for this?
What is the good practice for a case like this?
If you are inexperienced with writing asynchronous code, a good general rule to consider is to never define functions inside loops. Do not set request.onsuccess to a function from within the for loop.
You can perform multiple get and put requests on the same transaction when you do not expect the individual requests to fail for data-related reasons, such as the violation of a uniqueness constraint of an index, or because you are performing many thousands of requests on the same transaction and reaching processing limits.
You might find that using IDBObjectStore.prototype.openCursor together with IDBCursor.prototype.update is more convenient than using IDBObjectStore.prototype.get and IDBObjectStore.prototype.put.
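To illustrate, a minimal sketch of that cursor-based approach, assuming the same tbl_REUNIOES store and an array of ids like the val array from the question (the function name and the onComplete callback are hypothetical):
function markMeetingsInactive(db, ids, onComplete) {
    var idSet = new Set(ids);
    var transaction = db.transaction(["tbl_REUNIOES"], "readwrite");
    transaction.oncomplete = onComplete; // fires once, after all updates commit
    var store = transaction.objectStore("tbl_REUNIOES");
    store.openCursor().onsuccess = function (event) {
        var cursor = event.target.result;
        if (!cursor) {
            return; // no more records
        }
        if (idSet.has(cursor.primaryKey)) {
            var data = cursor.value;
            data.FLG_STATU_REUNI = 'I';
            cursor.update(data); // update in place while iterating
        }
        cursor.continue();
    };
}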
Your example code indicates that a successful get request means that data was retrieved, when in fact, this is not what actually happens. A successful get request just means that a request occurred without errors (e.g. against an object store that exists, against a database that is not blocked by other requests, against a database connection that is still valid). It does not mean that an object matched your get request query. You should be checking for whether the request's result object is defined, and use that check as a determination of whether an object matched your get query, and not simply that a successful request occurred.
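In code, that check might look like this (a generic sketch; store and someId are placeholders):
var request = store.get(someId);
request.onsuccess = function (event) {
    var result = event.target.result;
    if (result === undefined) {
        // the request succeeded, but no object matched someId
        return;
    }
    // ...safe to work with result here
};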
You might want to spend more time organizing your code into smaller functions that use clearer names. Your example code is difficult to read.
It looks like you are using some type of global db variable. If you are not well experienced with writing asynchronous code, avoid using a global db variable. There is no guarantee the db variable will be defined and open when you decide to access it, which could lead to an unexpected error.
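For instance, instead of a global db you could open the connection on demand and hand it to the code that needs it; a minimal sketch, where the database name "mydb" and version 1 are placeholders:
function withDatabase(onOpen) {
    var openRequest = indexedDB.open("mydb", 1); // placeholder name/version
    openRequest.onerror = function () {
        console.error(openRequest.error);
    };
    openRequest.onsuccess = function () {
        onOpen(openRequest.result);
    };
}

// Usage, combined with the cursor sketch above:
// withDatabase(function (db) { markMeetingsInactive(db, val, onDone); });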