Does Datastore support bulk updates? - google-cloud-datastore

I've combed through the Google Cloud Datastore documentation, but I haven't found anything useful about bulk updating documents. Does anyone have a best practice for bulk updates in Datastore? Is that even a valid use case for Datastore? (I'm thinking of something along the lines of MongoDB's Bulk.find.update or db.Collection.bulkWrite.)

You can use the projects.commit API call to write multiple changes at once. Most client libraries have a way to do this; for example, in Python use put_multi.
In C#, the DatastoreDb.Insert method has an overload for adding multiple entities at once.
If this is a one-off, consider using gcloud's import function.
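As a minimal sketch of the Python put_multi route: the helper below reads a batch of entities, applies the same change to each, and writes them back in one commit. The bulk_update name, the Task kind, and the done field are made up for illustration, and client is assumed to be a google.cloud.datastore.Client.

```python
def bulk_update(client, kind, ids, updates):
    """Read entities by key, apply the same updates to each, and write
    them back with put_multi, which batches them into a single
    projects.commit call instead of one API call per entity."""
    keys = [client.key(kind, i) for i in ids]
    entities = client.get_multi(keys)      # entities behave like dicts
    for entity in entities:
        entity.update(updates)
    client.put_multi(entities)             # one commit for the whole batch
    return len(entities)
```

Note that a single commit is limited in size, so very large updates still need to be chunked into multiple put_multi calls.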

Related

How can I delete selected documents in firebase Firestore

I have a lot of documents in my Firestore that I want to delete through the Firebase website, but I want to be able to select and delete them in bulk instead of deleting them one by one.
Deleting many documents on your Firestore instance manually can be a hard task, especially if you have hundreds or thousands of them. The best option is to do it programmatically, either from an application or from a Cloud Function.
This documentation shows how to use the delete() method in many popular languages to delete documents, fields, and collections, and this document shows a simple way of creating a Cloud Function that deletes a collection.
In summary, there is no easy way to delete many documents, fields, or collections through the user interface. The best way is to implement code to do it.
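A minimal sketch of the programmatic approach, following the batched-delete pattern from the Firestore docs: coll_ref is assumed to be a collection reference from the Python Admin SDK (e.g. firestore.client().collection("things")), and Firestore has no single "delete collection" call, so we delete documents in small batches until none remain.

```python
def delete_collection(coll_ref, batch_size=100):
    """Delete every document in a collection, batch_size at a time.
    Returns the number of documents deleted."""
    deleted = 0
    while True:
        docs = list(coll_ref.limit(batch_size).stream())
        if not docs:                 # nothing left to delete
            return deleted
        for doc in docs:
            doc.reference.delete()   # one delete per document
            deleted += 1
```

Keeping the batch small avoids holding thousands of document snapshots in memory at once.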

Firebase - Admin.firestore vs functions.firestore

I'm a newbie with Cloud Functions, and I have some confusion.
Is there any difference between admin.firestore and functions.firestore?
Is admin.database for the Realtime Database?
If Cloud Functions are basically written as JavaScript or TypeScript in a Node.js environment, then I can write them in JavaScript. But the documentation confused me, because it shows things differently in different places. For example, the code above uses document('some/doc') to get a document, but the code above it uses doc('doc') to achieve the same functionality. If both come from firestore, why do they differ?
Can someone help me understand these questions? Thank you for all your support.
functions.firestore is used to set up a trigger. You can set up a listener on a document path so that whenever an event happens on that document (create/update/delete, etc.), your function is executed. See how to create one here.
admin.firestore is used to create a reference to a Firestore database. Through that reference you can perform various operations on Firestore collections and documents using the Admin SDK. See the documentation.

Is there a way to know if gremlin query is read query or write query

I am trying to add basic read/write authorization to Gremlin Server, and I want to know if there is a way to identify whether a given query is a read-only query or a write query.
There is no API call you can make to determine that, but you can get inspiration for how to detect it from ReadOnlyStrategy here. The key is to cycle the Traversal object and look for a Step that implements the Mutating interface. If you find one of those in there, you could classify the traversal as a write query.
Of course, for Gremlin, classifying a query as read or write isn't so binary, as it could easily be a mix of read and write. It's also possible that at runtime the write might never execute, depending on the flow of the traversal, so it could be "runtime read-only". Hopefully, detecting the Mutating interface is a good-enough solution for you.
I'm not sure where you intend to implement this authorization function, but I sense it would be best done as a TraversalStrategy that would then fire on traversal execution. I don't know if that's too late for your authorization process, but it would be the easiest way I can envision. The problem is that if you are accepting scripts, then with that approach you could get a partial execution of a script up to the point where authorization was denied. If you need to disallow an entire script based on one write traversal, then you might need to look at a custom sandbox. Of course, it is better to avoid scripts altogether and simply use bytecode-based requests only. If you are only concerned with bytecode, then a TraversalStrategy should work pretty well for the authorization use case.
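To illustrate the step-inspection idea in gremlin-python terms, here is a rough sketch that checks a traversal's step instructions against a hand-maintained set of mutating step names. The set below is illustrative and incomplete; the authoritative check on the JVM is whether a step implements the Mutating interface, as ReadOnlyStrategy does.

```python
# Illustrative (not exhaustive) set of step names that mutate the graph.
MUTATING_STEPS = {"addV", "addE", "property", "drop", "mergeV", "mergeE"}

def is_write_traversal(step_instructions):
    """step_instructions: a list of (step_name, *args) tuples, shaped like
    Bytecode.step_instructions in gremlin-python. Returns True if any step
    looks like a mutation."""
    return any(step[0] in MUTATING_STEPS for step in step_instructions)
```

For example, g.addV('person').property('name', 'alice') would classify as a write, while g.V().has('name', 'alice').values('age') would not.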

Is there a way to validate the syntax of a Salesforce.com's SOQL query without executing it?

I'm writing an API that converts actions performed by a non-technical user into Salesforce.com SOQL 'SELECT', 'UPSERT', and 'DELETE' statements. Is there any resource, library, etc. out there that could validate the syntax of the generated SOQL? I'm the only one at my company with any experience with SOQL, so I'd love to place it into a set of automated tests so that other developers enhancing (or fixing) the SOQL generation algorithm know if it's still functioning properly.
I know one solution here is to just make these integration tests. However, I'd rather avoid that for three reasons:
I'd need to maintain another Salesforce.com account just for tests so we don't go over our API request cap.
We'll end up chasing false positives whenever there are connectivity issues with Salesforce.com.
Those other developers without experience will potentially need to figure out how to clean up the test Salesforce.com instance after DML operation test failures (which really means I'll need to clean up the instance whenever this occurs).
You might solve your problem by using the SoqlBuilder library. It generates SOQL for you and is capable of producing SOQL statements that would be quite error prone to create manually. The syntax is straight forward and I've used it extensively with very few issues.
I found another way to do this.
Salesforce.com posted their SOQL notation in Backus-Naur Form (BNF) here:
http://www.salesforce.com/us/developer/docs/api90/Content/sforce_api_calls_soql_bnf_notation.htm
This means you can use a BNF-aware language-recognition tool to parse the SOQL. One of the most common such tools, ANTLR, does this and is free. Following the ANTLR example, pass the SOQL grammar to its grammar compiler to get a lexer and a parser in your desired language (C#, Java, Python, etc.). Then pass the actual SOQL statements you want to validate into the lexer, and the resulting tokens into the parser, to break the SOQL statements apart. If the lexer or parser fails, you have invalid SOQL.
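Even before wiring up ANTLR, the fail-on-parse-error idea can be illustrated with a deliberately tiny validator. This sketch only checks the top-level SELECT ... FROM ... [WHERE ...] shape and is no substitute for a real parser generated from the published BNF:

```python
import re

# Oversimplified shape check: SELECT <field list> FROM <object> [WHERE ...].
# A real validator should be generated from the BNF grammar via ANTLR.
_SOQL_SHAPE = re.compile(
    r"^\s*SELECT\s+[\w.]+(\s*,\s*[\w.]+)*"   # field list
    r"\s+FROM\s+\w+"                         # object name
    r"(\s+WHERE\s+\S.*)?\s*$",               # optional WHERE clause
    re.IGNORECASE,
)

def looks_like_soql_select(query: str) -> bool:
    """Return True if the query matches the crude SELECT shape above."""
    return _SOQL_SHAPE.match(query) is not None
```

The value of the grammar-based approach is exactly that it replaces ad-hoc patterns like this with the full, published syntax.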
I can't think of a way to do this from outside of Salesforce (and even in Apex I've only got one idea right now that may not work), but I can think of two suggestions that may be of help:
Validate queries by running them, but do them in batches using a custom web service. i.e. write a web service in Apex that can accept up to 100 query strings at once, have it run them and return the results. This would drastically reduce the number of API calls but of course it won't work if you're expecting a trial-and-error type setup in the UI.
Use the metadata API to pull down information on all objects and their fields, and use those to validate that at least the fields in the query are correct. Validating other query syntax should be relatively straight forward, though conditionals may get a little tricky.
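The second suggestion could be sketched like this, assuming you have already pulled each object's field names down via the describe/metadata calls into a plain dict (the schema shape and names here are assumptions for illustration):

```python
def unknown_fields(object_name, fields, schema):
    """schema: {object_name: set_of_field_names}, e.g. built from
    describeSObject results pulled ahead of time. Returns the queried
    fields that the schema does not know about."""
    known = schema.get(object_name, set())
    return sorted(f for f in fields if f not in known)
```

An empty result means every referenced field exists; anything returned can be reported to the developer without spending an API call per generated query.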
You can make use of the Salesforce developer NuGet packages that leverage the SOAP API.

Accessing CoreData tables from fmdb

I'm using Core Data in my application for DML statements and everything is fine with it.
However, I don't want to use NSFetchedResultsController for simple queries like getting a count of rows, etc.
I've decided to use FMDB, but I don't know the actual table names needed to write the SQL. Entity and table names don't match.
I've even looked inside the .sqlite file with TextEdit, but no hope :)
FMResultSet *rs = [db getSchema] doesn't return any rows.
Maybe there's a better solution to my problem?
Thanks in advance
Core Data prefixes all its SQL names with Z. Use the sqlite3 command line tool to inspect your persistent store file and see what names it uses.
However, this is a very complicated and fragile solution. The Core Data schema is undocumented and changes without warning, because Core Data does not support direct SQL access. You are likely to make errors accessing the store file directly, and your solution may break at random when the API is next updated.
The Core Data API provides the functionality you are seeking. Just use a fetch request that evaluates a function over a specific value via an NSExpressionDescription. This allows you to get information like counts, minimums, maximums, etc. You can create and use such fetches independently of an NSFetchedResultsController.
The Core Data API is very feature rich. If you find yourself looking outside the API for a data solution, chances are you've missed something in the API.