Asynchronous problems with IndexedDB

I am having a problem with a function that uses IndexedDB, where I need to change the status of some meetings. The search grabs the ID of each checked meeting; right after that I run a for() loop over the array that holds the IDs and, on each iteration, access the database passing the ID for that iteration. Example code below:
var val = [];
var checkbox = $('input:checkbox[class^=checkReunioes]:checked');
if(checkbox.length > 0){
    checkbox.each(function(){
        val.push($(this).val());
    });
}
for(var i = 0; i < val.length; i++){
    var transaction = db.transaction(["tbl_REUNIOES"], "readwrite").objectStore("tbl_REUNIOES");
    var request = transaction.get(val[i]);
    request.onerror = function(event) {
        alert("BAD");
    };
    request.onsuccess = function(event) {
        var data = request.result;
        data.FLG_STATU_REUNI = 'I';
        var codigo_igreja = localStorage.getItem("igreja");
        var dataJSON = JSON.stringify(data);
        enviarFilaSincronismo("tbl_REUNIOES", "U", dataJSON, " WHERE COD_IDENT_REUNI = '" + val[i] + "' and COD_IDENT_IGREJ = '" + codigo_igreja + "'");
        var requestUpdate = transaction.put(data);
        requestUpdate.onerror = function(event) {
            alert("OK");
        };
        requestUpdate.onsuccess = function(event) {
            $("#listReunioes").html("");
            serchAll(w_key_celula);
        };
    };
}
In my view the problem occurs because IndexedDB is asynchronous: the loop moves on to the next lookup before the previous one has finished.
But how can I deal with this?
What is the good practice for a case like this?

If you are inexperienced with writing asynchronous code, a good general rule to consider is to never define functions inside loops. Do not set request.onsuccess to a function from within the for loop.
You can perform multiple get and put requests on the same transaction, provided you do not expect individual requests to fail for data-related reasons (such as violating a uniqueness constraint of an index) or because you are performing so many thousands of requests on one transaction that you hit processing limits.
You might find that using IDBObjectStore.prototype.openCursor together with IDBCursor.prototype.update is more convenient than using IDBObjectStore.prototype.get and IDBObjectStore.prototype.put.
Your example code assumes that a successful get request means data was retrieved, but that is not what it means. A successful get request only means the request completed without errors (e.g. the object store exists, the database is not blocked by other requests, the database connection is still valid). It does not mean an object matched your get query. You should check whether the request's result is defined, and use that check, not mere request success, to decide whether an object matched your query.
You might want to spend more time organizing your code into smaller functions that use clearer names. Your example code is difficult to read.
It looks like you are using some type of global db variable. If you are not well experienced with writing asynchronous code, avoid using a global db variable. There is no guarantee the db variable will be defined and open when you decide to access it, which could lead to an unexpected error.
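Putting those suggestions together, here is a minimal sketch of the cursor-based approach. It is not your exact code: the helper names markMeetingsInactive and updateMeetingStatus are made up, it assumes the checkbox values match the object store's keys, and no handler function is defined inside the loop itself.
function updateMeetingStatus(store, id) {
    var request = store.openCursor(IDBKeyRange.only(id));
    request.onsuccess = function (event) {
        var cursor = event.target.result;
        if (!cursor) {
            return; // the request succeeded but no object matched this id
        }
        var data = cursor.value;
        data.FLG_STATU_REUNI = 'I';
        cursor.update(data);
    };
}

function markMeetingsInactive(db, ids) {
    // one readwrite transaction serves all of the updates
    var store = db.transaction(["tbl_REUNIOES"], "readwrite")
                  .objectStore("tbl_REUNIOES");
    for (var i = 0; i < ids.length; i++) {
        updateMeetingStatus(store, ids[i]);
    }
}
Because every cursor.update runs inside the same transaction, either all matched meetings are updated or, if the transaction aborts, none of them are.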

Related

Ingest from storage with persistDetails = true does not save the ingest status result

I'm implementing a program to migrate a large amount of data to ADX based on the Ingest from Storage feature of ADX, and I need to check the status of each ingestion request when it finishes, but I'm facing an issue.
Based on the MS documentation here:
If I set persistDetails = true, for example with the command below, it should save the ingestion status, but currently this setting does not seem to work (with or without it):
.ingest async into table MigrateTable
(
    h'correct blob url link'
)
with (
    jsonMappingReference = 'table_mapping',
    format = 'json',
    persistDetails = true
)
The above command returns an OperationId, and when I use it to check the status after the ingest task finishes, I always get this error message:
Error An admin command cannot be executed due to an invalid state: State='Operation 'DataIngestPull' does not persist its operation results' clientRequestId: KustoWebV2;
Can someone clarify the root cause of this for me? To me it seems like a bug in ADX.
1. Ingesting data directly against the Data Engine, by running .ingest commands, is usually not recommended, compared to using Queued Ingestion (motivation included in the link). Using Kusto's ingestion client library allows you to track the ingestion status.
2. Some tools/services already do that for you, and you can consider using them directly, e.g. LightIngest, Azure Data Factory.
3. If you don't follow option 1, you can still look up the state/status of your command using the operation ID you get when using the async keyword, by using .show operations.
4. You can also use the client request ID to filter the result set of .show commands to view the state/status of your command.
5. If you're interested in looking specifically at failures, .show ingestion failures is also available for you.
The persistDetails option you specified in your .ingest command actually has no effect - as mentioned in the docs:
Not all control commands persist their results, and those that do usually do so by default on asynchronous executions only (using the async keyword). Please search the documentation for the specific command and check if it does (see, for example data export).
============ Update: sample code following the suggestion from Yoni ============
It turns out another member of my team had messed up the access rights in ADX; after fixing that, everything works fine.
I just have one concern related to PartiallySucceeded that needs clarification from #yoni or someone with better knowledge of it:
try
{
    var ingestProps = new KustoQueuedIngestionProperties(model.DatabaseName, model.IngestTableName)
    {
        ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
        ReportMethod = IngestionReportMethod.Table,
        FlushImmediately = true,
        JSONMappingReference = model.IngestMappingName,
        AdditionalProperties = new Dictionary<string, string>
        {
            { "jsonMappingReference", $"{model.IngestMappingName}" },
            { "format", "json" }
        }
    };
    var sourceId = Guid.NewGuid();
    var clientResult = await IngestClient.IngestFromStorageAsync(model.FileBlobUrl, ingestProps, new StorageSourceOptions
    {
        DeleteSourceOnSuccess = true,
        SourceId = sourceId
    });
    var ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    while (ingestionStatus.Status == Status.Pending)
    {
        await Task.Delay(WaitingInterval);
        ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    }
    if (ingestionStatus.Status == Status.Succeeded)
    {
        return true;
    }
    LogUtils.TraceError(_logger, $"Error when ingest blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
catch (Exception e)
{
    return false;
}

Most efficient way to run a recursive query in a container

I’m relatively new to Azure Cosmos DB, and I am struggling with how to approach this problem due to some conflicting documentation.
I have a single container, with JSON data.
Each JSON document has a root level array called opcos which can contain N number of GUIDS (typically less than 5).
These opcos GUIDs refer to child items, which are the IDs of separate documents.
If a parent document links to a child, then I need to check the child for more children in its opcos node.
What's the best way to get all the related items? There could be approx. 100 related documents.
I need to keep each document separate, so I can’t store them as sub-documents, as link between parent and child is fluid between multiple parents.
I am looking for a recursive solution, and I am trying to do this from within Cosmos DB, as I am assuming that running potentially 100 calls from outside of Cosmos DB carries a performance overhead with all the connecting etc.
Advice is welcomed. I took a snippet off another article and tried editing it, but it immediately errors on var context = getContext();
Also, any tips on debugging functions and stored procedures are welcome. I have 15 years of TSQL behind me, but this is very different.
When I tried using a function in Cosmos DB it says
ReferenceError:
'getContext' is not defined
If I try the following code
var context = getContext();
var collection = context.getCollection();

function userDefinedFunction(id){
    var context = getContext();
    var collection = context.getCollection();
    var metadataQuery = 'SELECT company.opcos FROM company where company.id in (' + id + ')';
    var metadata = collection.queryDocuments(collection.getSelfLink(), metadataQuery, {}, function (err, documents, options) {
        if (err) throw new Error('Error: ', + err.message);
        if (!documents || !documents.length) {
            throw new Error('Unable to find any documents');
        } else {
            var response = getContext().getResponse();
            /*for (var i = 0; i < documents.length; i++) {
                var children = documents[i]['$1'].Children;
                if (children.length) {
                    for (var j = 0; j < children.length; j++) {
                        var child = children[j];
                        children[j] = GetWikiChildren(child);
                    }
                }
            }*/
            response.setBody(documents);
        }
    });
}
The answer really comes down to your partitioning strategy.
First and foremost, your UDF doesn't run because UDFs don't have the execution context as part of their API. Your function will work, but you need to create it as a stored procedure, not a user defined function.
Now you have to keep in mind that stored procedures can be executed only against a single logical partition, and this is their transaction scope. Your technique will work as long as you pass an array of ids into the stored procedure and the documents you're manipulating are in the same partition. If they are not, then it's impossible to use a stored proc (well, except if you have one per document, which probably isn't worth it at this point).
On a side note, you want to parameterize the way you add the ids to the query, to prevent potential SQL injection; a rough sketch follows.
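As an illustration only, here is one way the function could be reworked as a stored procedure with a parameterized query. The procedure name getOpcos and the @ids parameter are made up for this sketch, and it assumes all the ids live in the same logical partition.
function getOpcos(ids) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    // Parameterized query: the ids array is passed as a single parameter
    // instead of being concatenated into the SQL string.
    var querySpec = {
        query: 'SELECT c.opcos FROM c WHERE ARRAY_CONTAINS(@ids, c.id)',
        parameters: [{ name: '@ids', value: ids }]
    };

    var accepted = collection.queryDocuments(
        collection.getSelfLink(), querySpec, {},
        function (err, documents) {
            if (err) throw new Error('Error: ' + err.message);
            if (!documents || !documents.length) {
                throw new Error('Unable to find any documents');
            }
            response.setBody(documents);
        });

    if (!accepted) throw new Error('The query was not accepted by the server.');
}
Recursing over the opcos of each result would then mean issuing further queryDocuments calls from inside the callback, which again only works while everything stays within the same partition.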

There seem to be no try/catch errors when making a cts.collectionQuery on a nonexistent collection

I have a MarkLogic JavaScript query that uses cts.collectionQuery, run in the Query Console. I'm looking for a way to detect if an invalid (nonexistent) collection is passed to the query. Wrapping the code in a try/catch block doesn't do anything useful and always returns a result, which seems like bad behavior.
In the following snippet, the value for "thisCollection" can be anything and the query will return a value without error.
try {
  var thisCollection = "xxxx";
  var collQuery = cts.collectionQuery(thisCollection);
  var phoneQuery = cts.jsonPropertyValueQuery("phoneNumber", number);
  var andQuery = cts.andQuery([collQuery, phoneQuery]);
  var thisCount = cts.estimate(andQuery);
  resultCount = resultCount + thisCount;
  resultString = resultString + "," + thisCount;
} catch(err) {
  resultString = "Query Error =" + err.name;
}
My expectation is that passing a nonexistent collection name to a collectionQuery would throw an error of some kind.
It might help to understand that collections are just a kind of label attached to documents. They exist by the mere use of them on documents, and don't need to be pre-declared in any way. That is also why one document can participate in many collections, as opposed to the fact that it can be in only one directory (or path).
The best way to detect whether a collection 'exists' is to do a cts.estimate on it:
let collectionExists = cts.estimate(collQuery) > 0;
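Applied to the snippet in the question (variable names reused from it), a minimal sketch might look like this:
var thisCollection = "xxxx";
var collQuery = cts.collectionQuery(thisCollection);

if (cts.estimate(collQuery) > 0) {
  var phoneQuery = cts.jsonPropertyValueQuery("phoneNumber", number);
  var thisCount = cts.estimate(cts.andQuery([collQuery, phoneQuery]));
  resultCount = resultCount + thisCount;
  resultString = resultString + "," + thisCount;
} else {
  // no document carries this collection, so treat it as "nonexistent"
  resultString = "Unknown collection: " + thisCollection;
}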
HTH!

How to discard initial data in a Firebase DB

I'm making a simple app that informs a client that other clients clicked a button. I'm storing the clicks in a Firebase (db) using:
db.push({msg:data});
All clients get notified of other user's clicks with an on, such as
db.on('child_added',function(snapshot) {
var msg = snapshot.val().msg;
});
However, when the page first loads I want to discard any existing data on the stack. My strategy is to call db.once() before I define the db.on('child_added',...) in order to get the initial number of children, and then use that to discard that number of calls to db.on('child_added',...).
Unfortunately, though, all of the calls to db.on('child_added',...) are happening before I'm able to get the initial count, so it fails.
How can I effectively and simply discard the initial data?
For larger data sets, Firebase now offers (as of 2.0) some query methods that can make this simpler.
If we add a timestamp field on each record, we can construct a query that only looks at new values. Consider this contrived data:
{
  "messages": {
    "$messageid": {
      "sender": "kato",
      "message": "hello world",
      "created": 123456 // Firebase.ServerValue.TIMESTAMP
    }
  }
}
We could find messages only after "now" using something like this:
var ref = new Firebase('https://<your instance>.firebaseio.com/messages');
var queryRef = ref.orderByChild('created').startAt(Firebase.ServerValue.TIMESTAMP);
queryRef.on('child_added', function(snap) {
  console.log(snap.val());
});
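For completeness, a small sketch of the write side (assuming the same 2.x SDK and the field names from the contrived data above), so that every record carries the created timestamp the query filters on:
ref.push({
  sender: 'kato',
  message: 'hello world',
  created: Firebase.ServerValue.TIMESTAMP // resolved to the server's time on write
});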
If I understand your question correctly, it sounds like you only want data that has been added since the user visited the page. In Firebase, the behavior you describe is by design, as the data is always changing and there isn't a notion of "old" data vs "new" data.
However, if you only want to display data added after the page has loaded, try ignoring all events prior until the complete set of children has loaded at least once. For example:
var ignoreItems = true;
var ref = new Firebase('https://<your-Firebase>.firebaseio.com');
ref.on('child_added', function(snapshot) {
  if (!ignoreItems) {
    var msg = snapshot.val().msg;
    // do something here
  }
});
ref.once('value', function(snapshot) {
  ignoreItems = false;
});
The alternative to this approach would be to write your new items with a priority as well, where the priority is Firebase.ServerValue.TIMESTAMP (the current server time), and then use a .startAt(...) query using the current timestamp. However, this is more complex than the approach described above.
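For reference, a rough sketch of that alternative, assuming the legacy Firebase 2.x SDK; Date.now() is used here as a client-side approximation of the current server time, so clock skew is ignored:
var ref = new Firebase('https://<your-Firebase>.firebaseio.com');

// write each click with the server timestamp as its priority
ref.push().setWithPriority({msg: data}, Firebase.ServerValue.TIMESTAMP);

// only listen for children whose priority is at or after "now"
ref.orderByPriority().startAt(Date.now()).on('child_added', function(snapshot) {
  var msg = snapshot.val().msg;
  // handle the new click here
});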

AngularJS multiple $http.get requests

I need to make two $http.get calls and send the returned response data to my service for further calculation.
I want to do something like this:
function productCalculationCtrl($scope, $http, MyService){
    $scope.calculate = function(query){
        $http.get('FIRSTRESTURL', {cache: false}).success(function(data){
            $scope.product_list_1 = data;
        });
        $http.get('SECONDRESTURL', {'cache': false}).success(function(data){
            $scope.product_list_2 = data;
        });
        $scope.results = MyService.doCalculation($scope.product_list_1, $scope.product_list_2);
    }
}
In my markup I am calling it like
<button class="btn" ng-click="calculate(query)">Calculate</button>
As $http.get is asynchronous, the data is not yet available when I pass it to the doCalculation method.
Any idea how I can make multiple $http.get requests and, as in the implementation above, pass both response payloads into the service?
What you need is $q.all.
Add $q to the controller's dependencies, then try:
$scope.product_list_1 = $http.get('FIRSTRESTURL', {cache: false});
$scope.product_list_2 = $http.get('SECONDRESTURL', {'cache': false});

$q.all([$scope.product_list_1, $scope.product_list_2]).then(function(values) {
    // $q.all resolves with the full response objects, so pass their data on
    $scope.results = MyService.doCalculation(values[0].data, values[1].data);
});
There's a simple and hacky way: Call the calculation in both callbacks. The first invocation (whichever comes first) sees incomplete data. It should do nothing but quickly exit. The second invocation sees both product lists and does the job.
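A minimal sketch of that hacky approach, reusing the names from the question (the tryCalculate helper is made up for this sketch):
function productCalculationCtrl($scope, $http, MyService) {
    $scope.calculate = function (query) {
        function tryCalculate() {
            // exit quickly until both responses have arrived
            if (!$scope.product_list_1 || !$scope.product_list_2) {
                return;
            }
            $scope.results = MyService.doCalculation($scope.product_list_1, $scope.product_list_2);
        }

        $http.get('FIRSTRESTURL', {cache: false}).success(function (data) {
            $scope.product_list_1 = data;
            tryCalculate();
        });
        $http.get('SECONDRESTURL', {'cache': false}).success(function (data) {
            $scope.product_list_2 = data;
            tryCalculate();
        });
    };
}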
I had a similar problem recently, so I'm going to post my answer as well:
In your case you only have two requests, and it seems that number will not change.
But this could just as well be any case with two or more requests being triggered at once.
So, considering two or more requests, this is how I would implement it:
var requests = [];
requests.push($http.get('FIRSTRESTURL', {'cache': false}));
requests.push($http.get('SECONDRESTURL', {'cache': false}));

$q.all(requests).then(function (responses) {
    // $q.all resolves with the response objects, so collect their data payloads
    var values = responses.map(function (response) {
        return response.data;
    });
    $scope.results = MyService.doCalculation(values);
});
Which, in this case, would force doCalculation to accept an array instead.
