Does AWS AppSync have scan operations to scan DynamoDB? - amazon-dynamodb

I am building a serverless web app with AWS Amplify - GraphQL - DynamoDB. I want to know what exactly a scan operation is in this context. For example, I have a User table, and the queries listUsers and getUser were generated from my Amplify schema. Are they scan operations or queries?
Thank you in advance for your answers; I could only find the definition of a scan operation, but there are no examples that help me identify one when it comes to GraphQL.

Amplify uses Filter Expressions, which are a type of Query.
You can see this yourself by looking at the .vtl files that Amplify generates and uploads to AppSync.
They are located here: amplify/#current-cloud-backend/api/[API NAME]/build/resolvers
In that folder you can open one of the Query.list[Model].req.vtl files. Even if you are not familiar with the Velocity Template Language, you can still get the idea: you will see that it uses the expression $util.transform.toDynamoDBFilterExpression.
For more information, see the resolver utility reference, and in particular the docs for toDynamoDBFilterExpression.
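To make the Scan/Query distinction concrete outside of GraphQL, here is a minimal sketch using boto3 against a hypothetical User table (the table and attribute names are assumptions, not something generated by Amplify). A Query targets items by key and only reads the matching items, while a Scan reads the whole table and a filter expression only trims the results after they have been read.

    # Sketch: Scan vs. Query against a hypothetical "User" table (names are assumptions).
    import boto3
    from boto3.dynamodb.conditions import Attr, Key

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("User")

    # Query: addresses items by their key; only matching items are read (and billed).
    by_key = table.query(KeyConditionExpression=Key("id").eq("user-123"))

    # Scan: reads every item in the table; the FilterExpression only removes
    # non-matching items from the response after they have been read.
    by_filter = table.scan(FilterExpression=Attr("status").eq("ACTIVE"))

    print(by_key["Items"], by_filter["Items"])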

Related

Is there a way to add a new field to all of the documents in a Firestore collection?

I have a collection that needs to be updated. I need to add a new field and fill it in based on an existing field.
Let's say I have a collection called documents:
documents/{documentId}: {
  existingField: ['foo', 'bar'],
  myNewField: ['foo', 'bar']
}
documents/{anotherDocumentId}: {
  existingField: ['baz'],
  myNewField: ['baz']
}
// ... and so on
I already tried firing up a local Cloud Function from the emulator that loops over each document and writes to production data based on the logic I need. The problem is that the function can only live up to a maximum of 30 seconds. What I need is some kind of console tool that I can run as an admin (using a service account) to quickly manage my needs.
How do you handle such cases?
Firebase does not provide a console or tool to do migrations.
You can write a program to run on your development machine that uses one of the backend SDKs (like the Firebase Admin SDK) to query, iterate, and update the documents, and let it run as long as you want.
There is nothing specific built into the API for this type of data migration. You'll have to update each document in turn, which typically involves also reading all documents (or at least their IDs).
While it is possible to do this on Cloud Functions, I find it easier to do it with a local Node.js script, as that doesn't have the runtime limits Cloud Functions imposes.
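The answer mentions a Node.js script; the same idea as a rough sketch with the Python Admin SDK (firebase_admin), reusing the collection and field names from the question, could look something like this (the service-account path and batch size are assumptions):

    # Sketch: backfill myNewField from existingField on every document in "documents".
    import firebase_admin
    from firebase_admin import credentials, firestore

    cred = credentials.Certificate("service-account.json")  # path is an assumption
    firebase_admin.initialize_app(cred)
    db = firestore.client()

    batch = db.batch()
    count = 0
    for snapshot in db.collection("documents").stream():
        data = snapshot.to_dict()
        batch.update(snapshot.reference, {"myNewField": data.get("existingField", [])})
        count += 1
        if count % 400 == 0:  # stay under Firestore's 500-writes-per-batch limit
            batch.commit()
            batch = db.batch()
    batch.commit()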

Bulk delete support in Cosmos DB using .NET SDK

Based on the documentation related to the Cosmos DB bulk executor (https://learn.microsoft.com/en-us/azure/cosmos-db/bulk-executor-dot-net), there is support for bulk delete via the bulk executor.
However, the examples under the new bulk support within the .NET SDK (https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) do not explicitly state anything about deletion.
I wanted to understand whether there are any drawbacks to attempting a delete on several documents using the new bulk execution support (here: https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/), or whether it is okay to proceed with a pattern similar to the "Create" flow described in the sample.
When Bulk mode is enabled, any point operation (ReadItem, CreateItem, UpsertItem, DeleteItem, ReplaceItem) will benefit from it. Just follow the same pattern of concurrent Tasks, but call DeleteItem instead of CreateItem (you could even mix different operation types).
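For illustration only: the answer is about the .NET SDK's bulk mode, but the "fan out many concurrent point deletes" shape it describes looks roughly like this in the Python azure-cosmos async client (this sketch does not use the .NET AllowBulkExecution feature itself, and the account, database, and container names are assumptions):

    # Sketch: concurrent point deletes with the Python async client.
    # This mirrors the concurrent-tasks pattern only; it is not the .NET bulk mode.
    import asyncio
    from azure.cosmos.aio import CosmosClient

    async def delete_many(endpoint, key, ids_and_partition_keys):
        async with CosmosClient(endpoint, credential=key) as client:
            container = client.get_database_client("mydb").get_container_client("items")
            tasks = [
                container.delete_item(item=doc_id, partition_key=pk)
                for doc_id, pk in ids_and_partition_keys
            ]
            await asyncio.gather(*tasks)

    # asyncio.run(delete_many("https://myaccount.documents.azure.com", "<key>", [("1", "pk1")]))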

Is there a way to run GQL queries without defining an entity model in Python?

I have a Java, Spring Data app that uses Datastore. I need a subset of this data to run analytics in a Python app. What I need in the Python app is essentially a join (yup, the relational mindset doesn't leave me) between two "Kinds", queried by the key of one kind.
The NDB client requires creating the same entity models in Python to be able to query data, which is a drag. Why can't I simply run the console version of GQL (select * from kind) from Python? Maybe I am missing something, as this sort of querying is available in almost all relational and NoSQL databases.
Your observations are correct: a GQL query cannot perform a SQL-like "join" query. This is documented on the GQL Reference for Python NDB/DB documentation page.
If you would like to request its implementation, you can simply open a feature request for it in the Public Issue Tracker.
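If the goal is mainly to avoid redefining NDB models, one option (my own suggestion, not something the answer prescribes) is the lower-level google-cloud-datastore client, which returns entities as dict-like objects without model classes; the join itself then happens in your own code, for example by fetching the second kind by key:

    # Sketch: model-free reads with google-cloud-datastore, joined client-side.
    # The kind and property names ("Order", "Customer", "customerId") are made up.
    from google.cloud import datastore

    client = datastore.Client(project="my-project")

    orders = list(client.query(kind="Order").fetch(limit=100))
    for order in orders:
        # Entities behave like dicts; build the key of the related kind and fetch it.
        customer = client.get(client.key("Customer", order["customerId"]))
        print(order.key.id_or_name, dict(order), dict(customer) if customer else None)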

Automated import of individual JSON files to Firestore with POST https://firestore.googleapis.com/v1beta1/{database}/documents.commit?

As the title suggests, can I POST a single JSON file directly to my Firestore database with https://firestore.googleapis.com/v1beta1/{database}/documents.commit, and will it be processed and added to the collections, etc.? Or should I go with POST projects.databases.documents.createDocument? I was reading this documentation.
I want to put JSON files from different sources into my Firestore database to build up my collections.
And where should I put the filename of the JSON file that I want to upload?
Thanks!!
You can see the usage of both calls here [1]:
documents.commit = commits a transaction, while optionally updating documents
documents.createDocument = creates a new document
To use the JSON in the API call you need to send a POST request; check this question [2].
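As a minimal sketch of the createDocument flavour (the project ID, collection name, and field layout are assumptions): the JSON file is read locally and its contents, converted to Firestore's typed-value format, go into the request body; the filename itself is never sent.

    # Sketch: create one Firestore document from a local JSON file via the REST API.
    # Project, collection, and field names are assumptions; the OAuth2 token must be
    # obtained separately (for example with google-auth and a service account).
    import json
    import requests

    PROJECT = "my-project"            # assumption
    COLLECTION = "mycollection"       # assumption
    TOKEN = "<OAuth2 access token>"

    with open("source.json") as f:
        data = json.load(f)           # e.g. {"title": "foo", "tags": ["a", "b"]}

    # The REST API expects typed values under "fields".
    body = {
        "fields": {
            "title": {"stringValue": data["title"]},
            "tags": {"arrayValue": {"values": [{"stringValue": t} for t in data["tags"]]}},
        }
    }

    url = (
        f"https://firestore.googleapis.com/v1beta1/projects/{PROJECT}"
        f"/databases/(default)/documents/{COLLECTION}"
    )
    resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
    print(resp.status_code, resp.json())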
Also, regarding your last comment, you can start collections and add documents using the Firestore UI, but you can also do that using the client libraries in different languages (Python, Java, Go...). Here is a list of "How to"s regarding Firestore [3].
In case you think that some features are missing, you can always file a Feature Request following this link [4] (as Firestore is still not there, I would choose Datastore), but keep in mind that Firestore is still in Beta.

cosmosdb - archive data older than n years into cold storage

I have researched in several places and could not find any direction on what options there are to archive old data from Cosmos DB into cold storage. I see that for DynamoDB in AWS it is mentioned that you can move DynamoDB data into S3, but I'm not sure what the options are for Cosmos DB. I understand there is a time-to-live option where the data will be deleted after a certain date, but I am interested in archiving rather than deleting. Any direction would be greatly appreciated. Thanks.
I don't think there is a single-click, built-in feature in Cosmos DB to achieve that.
Still, since you mentioned appreciating any direction, I suggest you consider the DocumentDB Data Migration Tool.
Notes about the Data Migration Tool:
- you can specify a query to extract only the cold data (for example, by a creation date stored within the documents); see the sketch after this list
- it supports exporting to various targets (JSON file, blob storage, a DB, another Cosmos DB collection, etc.)
- it compacts the data in the process: it can merge documents into a single array document and zip it
- once you have the configuration set up, you can script this to be triggered automatically using your favorite scheduling tool
- you can easily reverse the source and target to restore the cold data to the active store (or to dev, test, backup, etc.)
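If you prefer to script the extraction yourself rather than use the tool, a rough sketch of the same query-by-date idea, assuming the Python azure-cosmos and azure-storage-blob packages and a creation timestamp on each document (the account, container, and property names are assumptions):

    # Sketch: copy documents older than ~5 years to blob storage as an archive file.
    import json
    from datetime import datetime, timedelta, timezone

    from azure.cosmos import CosmosClient
    from azure.storage.blob import BlobServiceClient

    cutoff = datetime.now(timezone.utc) - timedelta(days=5 * 365)

    cosmos = CosmosClient("https://myaccount.documents.azure.com", credential="<cosmos key>")
    container = cosmos.get_database_client("mydb").get_container_client("items")

    old_docs = container.query_items(
        query="SELECT * FROM c WHERE c.createdAt < @cutoff",
        parameters=[{"name": "@cutoff", "value": cutoff.isoformat()}],
        enable_cross_partition_query=True,
    )

    blob_service = BlobServiceClient.from_connection_string("<storage connection string>")
    archive = blob_service.get_container_client("cosmos-archive")
    archive.upload_blob(
        name=f"archive-{cutoff.date()}.json",
        data=json.dumps(list(old_docs)),
        overwrite=True,
    )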
To remove the exported data you could use the mentioned TTL feature, but that could cause data loss should your export step fail. I would suggest writing and executing a Stored Procedure to query and delete all exported documents with a single call. The SP would not execute automatically, but it could be included in the automation script and executed only if the data was exported successfully first.
See: Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs.
UPDATE:
These days Cosmos DB has added the Change Feed, which really simplifies writing a carbon copy somewhere else.
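A minimal sketch of that idea, assuming the Python azure-cosmos client and an archive container (all names are assumptions; the .NET change feed processor is the more common choice for a long-running copier):

    # Sketch: read the container's change feed and mirror each changed document
    # into an archive container.
    from azure.cosmos import CosmosClient

    cosmos = CosmosClient("https://myaccount.documents.azure.com", credential="<cosmos key>")
    database = cosmos.get_database_client("mydb")
    source = database.get_container_client("items")
    archive = database.get_container_client("items-archive")

    for doc in source.query_items_change_feed(is_start_from_beginning=True):
        archive.upsert_item(doc)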
