I'm looking for an example to delete items from Google Cloud Datastore by:
Key
Kind
Filter
Ancestry
P.S. I couldn't find them here:
https://developers.google.com/datastore/docs/concepts/queries
Google Cloud Datastore only supports delete-by-key (and in general, does not support "update queries").
To delete a small number of entities, you can execute a (keys-only) RunQuery operation to fetch the keys and then a BlindWrite request to delete the entities.
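For example, with the newer google-cloud-datastore Java client (a different surface from the raw API the link above documents, but the same two-step pattern), a keys-only query followed by a delete-by-key might look like the sketch below; the Task kind and the done property are made-up placeholders:

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.KeyQuery;
import com.google.cloud.datastore.Query;
import com.google.cloud.datastore.QueryResults;
import com.google.cloud.datastore.StructuredQuery.PropertyFilter;

import java.util.ArrayList;
import java.util.List;

Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

// Step 1: keys-only query -- fetch only the keys of the entities to delete.
KeyQuery query = Query.newKeyQueryBuilder()
        .setKind("Task")                             // placeholder kind
        .setFilter(PropertyFilter.eq("done", true))  // placeholder filter
        .build();

List<Key> keys = new ArrayList<>();
QueryResults<Key> results = datastore.run(query);
results.forEachRemaining(keys::add);

// Step 2: delete by key -- the only form of delete Datastore supports.
datastore.delete(keys.toArray(new Key[0]));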
Or, if the entities are in a single entity group, you can do the entire operation inside a transaction: use BeginTransaction to create a new transaction, set the transaction handle in the query's ReadOptions, and apply the mutation with a Commit request.
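A sketch of that transactional variant with the same client and placeholders as above; note that inside a transaction the query must be an ancestor query, i.e. all the entities must live in one entity group:

import com.google.cloud.datastore.Transaction;

Transaction txn = datastore.newTransaction();
try {
    // Running the query through the transaction pins the reads to its snapshot
    // (the query must carry an ancestor filter to be allowed here).
    QueryResults<Key> keys = txn.run(query);
    keys.forEachRemaining(txn::delete);  // queue the deletes as part of the transaction
    txn.commit();                        // apply all mutations atomically
} finally {
    if (txn.isActive()) {
        txn.rollback();
    }
}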
If you are deleting a large number of entities, you can use the above technique in a MapReduce.
If you are deleting all entities of a particular kind, you can use the App Engine Administration Console to delete entities in bulk.
Related
I have been developing apps that use Firestore as a primary data store and Typesense for full-text search. Some collections are completely duplicated and synced between the databases.
Is there a reason why I should not use Typesense as the SOLE data collection and avoid keeping a collection of the data in Firestore?
What are the downsides of using a search engine as a primary datastore? Expense? Scalability?
Is there a reason why I should not use Typesense as the SOLE data collection and avoid keeping a collection of the data in Firestore?
Security, scalability, offline data persistence, and interoperability with all the other Firebase products. Besides that, I think that the following resource might also help:
How to filter Firestore data cheaper?
That depends on your business and on how data enters your system.
If you need a specific backend, a Firebase Cloud Function coupled with Algolia (via the npm algoliasearch package) is a good option.
Keeping the data in Firebase itself adds little cost (you are billed mainly for reads and writes), so it is not a big deal to keep the data in Firebase and build the search engine and filter UI with Algolia.
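The Cloud Function half of that setup is Node.js (the npm algoliasearch package), so it can't be shown in the Java used elsewhere on this page. Purely as a sketch of the indexing half, Algolia's Java API client performs the same saveObject call from any backend; the app id, API key, index name, and record shape below are all placeholders:

import com.algolia.search.DefaultSearchClient;
import com.algolia.search.SearchClient;
import com.algolia.search.SearchIndex;

public class AlgoliaSync {
    // Hypothetical record mirroring a Firebase document.
    public static class Product {
        public String objectID;  // Algolia's required id; reuse the document id
        public String name;
    }

    public static void main(String[] args) {
        SearchClient client = DefaultSearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_KEY");
        SearchIndex<Product> index = client.initIndex("products", Product.class);

        Product p = new Product();
        p.objectID = "firebase-doc-id";
        p.name = "example";
        index.saveObject(p);  // push the searchable copy; Firebase stays the source of truth
    }
}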
I would like to verify if my understanding is correct about OperationType in Cosmos DB (Cassandra API), as I cannot find a good explanation in the documentation.
Basically, I have run a few different cases on Cosmos DB, and I see that when I query data using the partition key, only ReadFeed is used, but when I am not using the partition key, the OperationType "Query" is used. It means that apparently in the first case it doesn't use the query engine and goes directly to storage, and in the second case the query engine is used. Does that sound right?
Read is when you Get a Document
The Get Document operation retrieves a document by its partition key and document key.
ReadFeed is when you List Documents
Performing a GET on the documents resource of a particular collection, i.e. the docs URI path, returns a list of documents under the collection. ReadFeed can be used to retrieve all documents, or just the incremental changes to documents within the collection.
Query is when you Query Documents (Search)
You can query arbitrary JSON documents in a collection by performing a POST against the "colls" resource in Cosmos DB.
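To make the distinction concrete, here is roughly how the three operation types map onto calls in the DocumentDB Java SDK (the same SDK used later on this page); the endpoint, key, and resource links are placeholders, and exception handling is omitted:

import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.Document;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.FeedOptions;
import com.microsoft.azure.documentdb.FeedResponse;

DocumentClient client = new DocumentClient("https://myaccount.documents.azure.com",
        "MASTER_KEY", ConnectionPolicy.GetDefault(), ConsistencyLevel.Session);

// Read: point-read a single document by its link (Get Document).
Document doc = client.readDocument("dbs/db/colls/orders/docs/doc1", null).getResource();

// ReadFeed: list the documents of a collection (a GET on the docs path).
FeedResponse<Document> feed = client.readDocuments("dbs/db/colls/orders", new FeedOptions());

// Query: run a SQL query (a POST against the colls resource).
FeedResponse<Document> results = client.queryDocuments("dbs/db/colls/orders",
        "SELECT * FROM c WHERE c.status = 'Final'", new FeedOptions());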
ReadFeed, or change feed, is an internal feature of Cosmos DB. Other databases maintain a feed of changes as well; they may expose it, and you can interact with it in the ways they make available. They use the feed of changes internally for many things. Cosmos DB is probably able to satisfy your query using the ReadFeed when you filter by partition.
In the project I am working on, we have a database per tenant, and each tenant consists of at least one department. One of the requirements is that when an admin user deletes a department through the custom frontend we provide, the system should first archive that department's data to blob storage before the data is deleted. We have the same requirement for tenants: we need to archive the data before the tenant's database is removed from the account.
Now, my question: is there any best practice for doing this? We are planning to retrieve all the data from all collections with a Mongo query based on the department id (which is also the partition key) and then send it to blob storage. The challenge is the execution of the query to retrieve all the data: it can be a huge amount, and the RUs that action consumes may affect the performance of the system, because other users may be using it while we remove the data.
I looked at mongodump and mongoexport, but these are standalone applications, so we cannot execute them from our code, can we?
Any ideas? Thanks a lot.
I think one way to solve this is by using the Change Feed, as it really helps and simplifies writing a carbon copy somewhere else.
However, as of now the change feed processor won't notify you of deleted documents, so you can't listen for them; that feature is planned.
Your best bet is to write a custom application that does the archiving using the query language support, along the lines of the sketch below.
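A minimal sketch of such a custom archiver, assuming the MongoDB Java driver and the Azure Storage Blob v12 client; the tenantDb, orders, and departmentId names are placeholders. A small batch size spreads the query cost over many cheap pages instead of one RU spike, which is the concern raised above:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import org.bson.Document;

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import static com.mongodb.client.model.Filters.eq;

public class DepartmentArchiver {
    public static void archive(String mongoUri, String blobConnString, String departmentId) {
        StringBuilder jsonLines = new StringBuilder();

        try (MongoClient mongo = MongoClients.create(mongoUri)) {
            MongoCollection<Document> orders =
                    mongo.getDatabase("tenantDb").getCollection("orders");  // placeholders

            // departmentId is the partition key, so this stays a single-partition query;
            // the small batch size keeps each page cheap in RUs.
            try (MongoCursor<Document> cursor =
                         orders.find(eq("departmentId", departmentId)).batchSize(100).iterator()) {
                while (cursor.hasNext()) {
                    jsonLines.append(cursor.next().toJson()).append('\n');
                }
            }
        }

        byte[] payload = jsonLines.toString().getBytes(StandardCharsets.UTF_8);
        BlobClient blob = new BlobClientBuilder()
                .connectionString(blobConnString)
                .containerName("department-archive")
                .blobName(departmentId + ".jsonl")
                .buildClient();
        blob.upload(new ByteArrayInputStream(payload), payload.length, true);  // overwrite if present
    }
}

For departments too large to buffer in memory, you would upload in chunks (for example, staging blocks with a BlockBlobClient) rather than building one string.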
One colleague said that Cosmos DB will stop supporting collections without a partition key, but I can't find any information about this statement from Microsoft.
The application I'm working on has a collection of order records. A typical query returns tens of thousands of these records, so if I use the order id as the partition key, it will always run cross-partition queries. And the requirement is to get all records across all tenants, so partitioning by tenant id isn't an option either.
I thought it would be fine to just create a collection without a partition key; I'll worry about archiving data later (probably with Azure Functions and the change feed).
Is it a good idea to do so?
One colleague said that Cosmos DB will stop supporting collections without a partition key. But I can't find any information about this statement from Microsoft.
Based on the tips shown in the Cosmos DB portal, this message is confined to the portal so far.
You can still create a non-partitioned collection by using the SDK:
// Create a collection without specifying a partition key (DocumentDB Java SDK).
DocumentCollection collection = new DocumentCollection();
collection.setId("jay");
ResourceResponse<DocumentCollection> createdCollection =
        client.createCollection("dbs/db", collection, null);
So I think your service will not be affected for now. As for future trends, I suggest you pay attention to Microsoft's official statements. If you have any special needs, you can submit feedback for help.
I have a question about the capabilities of Firebase and whether it has equivalents to MySQL features like:
Events
Triggers
Stored Procedures
In my case I want to migrate from MySQL to Firebase, but I need to know whether my use case can be replicated in Firebase.
In my current MySQL DB I have a table with a column called status; once it changes to 'Final', it triggers the execution of a stored procedure that runs a calculation over an entire table.
So, in other words, I would need to be able to add a 'trigger' on the actual Firebase data that then performs a 'stored procedure' to calculate something. Is this possible with Firebase?
You now have the capability to use Cloud Functions for Firebase to write code that triggers in response to changes in your Firebase project. There is lots of sample code that illustrates how to respond to writes to a particular location in your database, among many other types of events.
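Cloud Functions themselves are written in JavaScript, so, to stay in the Java used elsewhere on this page, here is a rough illustration of the same trigger idea using the Firebase Admin SDK's server-side listeners instead; the orders path and the recalculate() helper are hypothetical:

import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

// Assumes FirebaseApp.initializeApp(...) has already run with service-account credentials.
DatabaseReference orders = FirebaseDatabase.getInstance().getReference("orders");

orders.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildChanged(DataSnapshot snapshot, String previousChildName) {
        // Fires when any order changes; check whether status just became 'Final'.
        String status = snapshot.child("status").getValue(String.class);
        if ("Final".equals(status)) {
            recalculate();  // hypothetical stand-in for the stored-procedure logic
        }
    }
    @Override public void onChildAdded(DataSnapshot snapshot, String previousChildName) {}
    @Override public void onChildRemoved(DataSnapshot snapshot) {}
    @Override public void onChildMoved(DataSnapshot snapshot, String previousChildName) {}
    @Override public void onCancelled(DatabaseError error) {}
});

Unlike a MySQL trigger, this listener runs in your own server process (or in a Cloud Function), not inside the database, so it fires after the write has already been committed.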