Is it possible to validate a GQL query in Cloud Datastore before submitting it? Are there any examples, preferably using the Java library google-cloud-datastore?

From the Google code examples:
public QueryResults<?> newQuery(String kind) {
  // [START newQuery]
  String gqlQuery = "select * from " + kind;
  Query<?> query = Query.newGqlQueryBuilder(gqlQuery).build();
  QueryResults<?> results = datastore.run(query);
  // Use results
  // [END newQuery]
  return results;
}
Is it possible to validate the gqlQuery before officially running the query?

It looks like you can't. Your best option would be to test your queries with the LocalDatastoreHelper class, like it's done here; check how it's set up, and look at the assertValidKey and GQL methods there.
Again, this is not actual validation of the query, but it seems like the best shot. Also, on failure the exception thrown is a DatastoreException, so you can try to catch that exception in your code.
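For illustration, a minimal sketch of that approach, assuming LocalDatastoreHelper from google-cloud-datastore's testing package (the kind and the deliberately malformed query are made up; the point is that running the query is the only way to surface a validation error):

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreException;
import com.google.cloud.datastore.Query;
import com.google.cloud.datastore.testing.LocalDatastoreHelper;

public class GqlSmokeTest {
  public static void main(String[] args) throws Exception {
    LocalDatastoreHelper helper = LocalDatastoreHelper.create();
    helper.start(); // spins up a local Datastore emulator
    Datastore datastore = helper.getOptions().getService();
    try {
      // Deliberately malformed GQL ("frm"): running it is the "validation".
      Query<?> query = Query.newGqlQueryBuilder("select * frm Task").build();
      datastore.run(query);
    } catch (DatastoreException e) {
      System.out.println("Invalid GQL: " + e.getMessage());
    } finally {
      helper.stop();
    }
  }
}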

Related

Is it a good / working practice to use Firebase's documentReference.get(GetOptions(source: cache)) in Flutter?

My issue was that, with the default GetOptions (omitting the parameter), a request like the following could take seconds, if not minutes, to load if the client is offline:
await docRef.get()...
If I check whether the client is offline and, in that case, purposefully change the Source to Source.cache, I get performance that is at least as good as, if not better than, when the client is online.
Source _source = Source.serverAndCache;
try {
  final result = await InternetAddress.lookup('example.com');
  if (result.isNotEmpty && result[0].rawAddress.isNotEmpty) {
    _source = Source.serverAndCache;
  }
} on SocketException catch (_) {
  _source = Source.cache;
}
and then use this variable in the following way:
docRef.get(GetOptions(source: _source))
.then(...
This code works perfectly for me now, but I am not sure whether there are any cases in which using it like this could cause issues.
It also seems like a lot of boilerplate code (I refactored it into a function so I can use it in any database method, but still... a sketch of that helper follows below).
If there are no issues with this, why isn't it the Firebase default, given that after trying the server for an unpredictably long time it switches to the cache anyway?
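For reference, a minimal sketch of that refactored helper, assuming cloud_firestore's Source enum and the same example.com lookup (the function name resolveSource is made up):

import 'dart:io';

import 'package:cloud_firestore/cloud_firestore.dart';

// Hypothetical helper: picks the fetch source based on connectivity.
Future<Source> resolveSource() async {
  try {
    final result = await InternetAddress.lookup('example.com');
    if (result.isNotEmpty && result[0].rawAddress.isNotEmpty) {
      return Source.serverAndCache;
    }
  } on SocketException catch (_) {
    // Offline: the cache answers immediately instead of waiting on the server.
    return Source.cache;
  }
  return Source.serverAndCache;
}

// Usage in any database method:
// final snapshot = await docRef.get(GetOptions(source: await resolveSource()));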

DynamoDB PartiQL pagination using SDK

I'm currently working on pagination in DynamoDB using PartiQL via the JS AWS SDK's executeStatement, but my returned object does not contain a NextToken (only the Items array), which is needed to paginate.
This is what the code looks like (pretty simple):
const statement = `SELECT "user", "id" FROM "TABLE-X" WHERE "activity" = 'XXXX'`;
const params = { Statement: statement };

try {
  const posted = await dynamodb.executeStatement(params).promise();
  return { posted: posted };
} catch (err) {
  throw new Error(err);
}
I was wondering if anyone has dealt with pagination using PartiQL for DynamoDB.
Could this be because my partition key is a string type?
Still trying to figure it out.
Thanks in advance!
It turns out that if you want a NextToken, DO NOT use version 2 of the AWS SDK for JavaScript; use version 3. Version 3 will always return a NextToken, even if it is undefined.
From there you can figure out your limits, etc. (the default page limit, until you actually get a NextToken, is 1 MB). You'll need to look into the DynamoDB v3 ExecuteStatement method.
You can also look into dynamodb paginators, which I've never used, but plan on studying.
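For illustration, a minimal sketch of a manual NextToken loop with the v3 client (the statement is the one from the question; the client/region setup is assumed, and this deliberately avoids the paginator helpers):

const { DynamoDBClient, ExecuteStatementCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});
const statement = `SELECT "user", "id" FROM "TABLE-X" WHERE "activity" = 'XXXX'`;

async function fetchAllPages() {
  const items = [];
  let nextToken; // undefined on the first request
  do {
    const response = await client.send(new ExecuteStatementCommand({
      Statement: statement,
      NextToken: nextToken,
    }));
    items.push(...(response.Items ?? []));
    nextToken = response.NextToken; // undefined once the last page is read
  } while (nextToken);
  return items;
}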

SuiteScript N/query SuiteQL how to select one transaction like Estimate

I tried some demo SuiteQL in NetSuite by following the examples that the SuiteScript 2.0 API provides, but they were too sparse for me; I still cannot figure out the right way to use it properly, and I had to fall back to the N/search module.
So I want to ask for some SuiteQL examples, especially for transactions.
Thank you!
Here is an example of how you could use the query module to achieve your goal. In this example you would query whatever type of transaction you wanted by using the function queryTransactionsFilteredByStatus defined below, passing it whatever status you like. This could obviously be expanded on to fit your use case more specifically.
define(['N/query'], (query) => {
    const queryTransactionsFilteredByStatus = (status) => {
        const sql = `select * from transaction as t where t.status = ?`;
        return query.runSuiteQL({
            query: sql,
            params: [status]
        }).asMappedResults();
    };
    // The rest of your code here...
});
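Since the question mentions estimates specifically, here is a hedged variation that filters on the transaction type instead (the 'Estimate' type value and the selected columns are assumptions to verify against the records/analytics browser):

define(['N/query'], (query) => {
    const queryEstimates = () => {
        // 'Estimate' is the assumed type value for estimate transactions.
        const sql = `select id, tranid, status from transaction where type = ?`;
        return query.runSuiteQL({
            query: sql,
            params: ['Estimate']
        }).asMappedResults();
    };
    // The rest of your code here...
});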

Ingest from storage with persistDetails = true does not save the ingest status result

I'm implementing a program to migrate a large amount of data to ADX based on ADX's Ingest from Storage feature, and I need to check the status of each ingestion request when it finishes, but I'm facing an issue.
Based on the MS documentation here,
if I set persistDetails = true, for example with the command below, it should save the ingestion status, but currently this setting does not seem to work (with or without it):
.ingest async into table MigrateTable
(
    h'correct blob url link'
)
with (
    jsonMappingReference = 'table_mapping',
    format = 'json',
    persistDetails = true
)
The above command returns an OperationId, and when I use it to check the ingestion status after the ingest task finishes, I always get this error message:
Error An admin command cannot be executed due to an invalid state: State='Operation 'DataIngestPull' does not persist its operation results' clientRequestId: KustoWebV2;
Can someone clarify what the root cause of this is? To me it seems like a bug in ADX.
1. Ingesting data directly against the Data Engine, by running .ingest commands, is usually not recommended, compared to using Queued Ingestion (motivation included in the link). Using Kusto's ingestion client library allows you to track the ingestion status. Some tools/services already do that for you, and you can consider using them directly, e.g. LightIngest or Azure Data Factory.
2. If you don't follow option 1, you can still look for the state/status of your command using the operation ID you get when using the async keyword, by using .show operations (see the examples after this list).
3. You can also use the client request ID to filter the result set of .show commands to view the state/status of your command.
4. If you're interested in looking specifically at failures, .show ingestion failures is also available for you.
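For illustration, a sketch of those lookups (the values in angle brackets are placeholders for the OperationId and client request ID your own command returned):

// State of the async command, by operation ID:
.show operations <OperationId>

// Filter recent commands by client request ID:
.show commands
| where ClientActivityId == '<client request id>'

// Ingestion failures for the same operation:
.show ingestion failures
| where OperationId == '<OperationId>'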
The persistDetails option you specified in your .ingest command actually has no effect, as mentioned in the docs:
Not all control commands persist their results, and those that do usually do so by default on asynchronous executions only (using the async keyword). Please search the documentation for the specific command and check if it does (see, for example data export).
============ Update: sample code following the suggestion from Yoni ============
It turns out another member of my team had messed up the access rights in ADX; after fixing that, everything works fine.
I just have one concern related to PartiallySucceeded that needs clarification from @yoni or someone with better knowledge of it.
try
{
    var ingestProps = new KustoQueuedIngestionProperties(model.DatabaseName, model.IngestTableName)
    {
        ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
        ReportMethod = IngestionReportMethod.Table,
        FlushImmediately = true,
        JSONMappingReference = model.IngestMappingName,
        AdditionalProperties = new Dictionary<string, string>
        {
            { "jsonMappingReference", $"{model.IngestMappingName}" },
            { "format", "json" }
        }
    };
    var sourceId = Guid.NewGuid();
    var clientResult = await IngestClient.IngestFromStorageAsync(model.FileBlobUrl, ingestProps, new StorageSourceOptions
    {
        DeleteSourceOnSuccess = true,
        SourceId = sourceId
    });

    // Poll the ingestion status table until the request leaves the Pending state.
    var ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    while (ingestionStatus.Status == Status.Pending)
    {
        await Task.Delay(WaitingInterval);
        ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    }
    if (ingestionStatus.Status == Status.Succeeded)
    {
        return true;
    }
    // Note: any other terminal state (Failed, PartiallySucceeded) falls through here.
    LogUtils.TraceError(_logger, $"Error when ingest blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
catch (Exception e)
{
    // The exception is swallowed and treated as a failed ingestion.
    return false;
}

If one of multiple adds in a SaveChangesAsync fails, do the others get added?

I have this function in my application. If the insert of the Phrase fails, can someone tell me whether the Audit entry still gets added? If so, is there a way I can package these into a single transaction that could be rolled back?
Also, if it fails, can I catch this and still have the procedure exit with an exception?
[Route("Post")]
[ValidateModel]
public async Task<IHttpActionResult> Post([FromBody]Phrase phrase)
{
phrase.StatusId = (int)EStatus.Saved;
UpdateHepburn(phrase);
db.Phrases.Add(phrase);
var audit = new Audit()
{
Entity = (int)EEntity.Phrase,
Action = (int)EAudit.Insert,
Note = phrase.English,
UserId = userId,
Date = DateTime.UtcNow,
Id = phrase.PhraseId
};
db.Audits.Add(audit);
await db.SaveChangesAsync();
return Ok(phrase);
}
I have this function in my application. If the insert of Phrase fails
then can someone tell me if the Audit entry still gets added?
You have written your code in a correct way by calling await db.SaveChangesAsync(); only once, after making all your modifications on the DbContext.
The answer to your question is: no, the Audit will not be added if Phrase fails.
Because you are calling await db.SaveChangesAsync(); after doing all your work with your entities, Entity Framework will generate all the required SQL queries and run them in a single SQL transaction, which makes the whole set of queries an atomic operation on your database. If one of the generated queries (e.g. the Audit insert) fails, the transaction will be rolled back. So every modification that has been made to your database will be undone, and Entity Framework will leave your database in a consistent state.
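If you ever do need several SaveChangesAsync calls to succeed or fail together, you can open an explicit transaction yourself. A minimal sketch, assuming EF6's Database.BeginTransaction() API (db, phrase, and audit are the names from the question):

using (var transaction = db.Database.BeginTransaction())
{
    try
    {
        db.Phrases.Add(phrase);
        await db.SaveChangesAsync();

        db.Audits.Add(audit);
        await db.SaveChangesAsync();

        // Both saves commit together...
        transaction.Commit();
    }
    catch
    {
        // ...or neither is persisted; rethrow so the action still fails.
        transaction.Rollback();
        throw;
    }
}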
