I already have a resource (a DynamoDB table) created using this:
resource "aws_dynamodb_table" "my_dynamo_table" {
  name     = "my_table"
  hash_key = "Id"
}
Now I would like to enable streams on this table. If I do this by updating the above resource like this:
resource "aws_dynamodb_table" "my_dynamo_table" {
name = "my_table"
hash_key = "Id"
stream_enable = true
stream_view_type = "NEW_IMAGE"
}
Terraform plans to delete the table and re-create it. Is there any other way? Can I just update the table in place to enable the stream?
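For reference, a minimal self-contained sketch of a table definition with streams enabled (illustrative only; the PAY_PER_REQUEST billing mode and the string type of Id are assumptions not stated in the question, and the hash key must also be declared in an attribute block):

resource "aws_dynamodb_table" "my_dynamo_table" {
  name         = "my_table"
  billing_mode = "PAY_PER_REQUEST" # assumption: not stated in the question
  hash_key     = "Id"

  # the hash key attribute must be declared explicitly
  attribute {
    name = "Id"
    type = "S" # assumption: Id is a string
  }

  stream_enabled   = true
  stream_view_type = "NEW_IMAGE"
}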
I have a container in Azure Cosmos DB that holds multiple document types in the same container, so the key-value pairs change based on the document type. I'm trying to read the data from this container in Synapse using the following code:
cfg = {
    "spark.cosmos.accountEndpoint": Endpoint,
    "spark.cosmos.accountKey": accountKey,
    "spark.cosmos.database": databaseName,
    "spark.cosmos.container": containerName,
}
df = spark.read.format("cosmos.oltp").options(**cfg)\
    .option("spark.cosmos.read.inferSchema.enabled", "true").load()
However, the schema I'm getting through this dataframe is inferred from the first row's document type. How can I ensure that I read data for one particular type and that the schema is inferred accordingly?
If you don't want to use the customQuery config option (shown in the answer below), which would hard-code the query sent to the Cosmos backend instead of relying on filter push-down in Spark, you can also use the config option "spark.cosmos.read.inferSchema.query". This only changes the query used for schema inference but otherwise relies on filter push-down, which can result in better query performance in some cases:
cfg = {
    "spark.cosmos.accountEndpoint": Endpoint,
    "spark.cosmos.accountKey": accountKey,
    "spark.cosmos.database": databaseName,
    "spark.cosmos.container": containerName,
    "spark.cosmos.read.inferSchema.query": "SELECT * FROM c WHERE c.typ = 'TYPE_NAME'"
}
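With that config, the read itself stays the same as in the other answer, and filtering to one document type is left to Spark's filter push-down (a sketch; typ and TYPE_NAME are placeholders carried over from the query above):

df = spark.read.format("cosmos.oltp").options(**cfg)\
    .option("spark.cosmos.read.inferSchema.enabled", "true").load()

# the schema is inferred from documents matching the inferSchema query,
# while the row filter itself is pushed down by Spark
df_one_type = df.filter(df.typ == "TYPE_NAME")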
If you have multiple document types in one container, use the spark.cosmos.read.customQuery parameter in the configuration to get a particular document type as a spark dataframe:
cfg = {
    "spark.cosmos.accountEndpoint": Endpoint,
    "spark.cosmos.accountKey": accountKey,
    "spark.cosmos.database": databaseName,
    "spark.cosmos.container": containerName,
    "spark.cosmos.read.customQuery": "SELECT * FROM c WHERE c.typ = 'TYPE_NAME'"
}
df = spark.read.format("cosmos.oltp").options(**cfg)\
    .option("spark.cosmos.read.inferSchema.enabled", "true").load()
Refer to this documentation for more information.
For some reason I want to use book.randomID as a key in an Amazon DynamoDB table using Java code. When I tried it, it added a new field in the item named "book.randomID".
List<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
keySchema.add(new KeySchemaElement().withAttributeName("conceptDetailInfo.conceptId").withKeyType(KeyType.HASH)); // Partition
and here is the JSON structure:
{
  "_id": "123",
  "book": {
    "chapters": {
      "chapterList": [
        {
          "_id": "11310674",
          "preferred": true,
          "name": "1993"
        }
      ],
      "count": 1
    },
    "randomID": "1234"
  }
}
So is it possible to use such an element as a key? If yes, how can we use it as a key?
When creating DynamoDB tables, AWS limits key attributes to the types String, Binary and Number. Your attribute book.randomID seems to be a String.
As long as it's not one of the other data types like List, Map or Set, you should be fine.
Just going to the AWS console and trying it out worked for me.
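To illustrate, here is a rough sketch using the AWS SDK for Java (the table name books, the throughput values, and the attribute values are made up for the example). Note that a key attribute must be a top-level scalar, so "book.randomID" here is simply an attribute whose literal name contains a dot, not a path into the nested book map; its value has to be copied out of the nested document when the item is written:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.model.*;

public class BookTableExample {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // create a table whose hash key is literally named "book.randomID"
        client.createTable(new CreateTableRequest()
            .withTableName("books")
            .withKeySchema(new KeySchemaElement().withAttributeName("book.randomID").withKeyType(KeyType.HASH))
            .withAttributeDefinitions(new AttributeDefinition().withAttributeName("book.randomID").withAttributeType(ScalarAttributeType.S))
            .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L)));

        // (in real code, wait for the table to become ACTIVE before writing)

        // when writing an item, copy the nested value into the key attribute explicitly
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("books");
        table.putItem(new Item()
            .withPrimaryKey("book.randomID", "1234")
            .withString("_id", "123"));
    }
}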
How can I update multiple records in DynamoDB in a single query?
I have a CSV file as input; based on the CSV file, I have to update multiple records (only one attribute each) in the DB.
Is there any API available for this? Or can this be done using batch processing (Spring Batch)?
DynamoDB doesn't have a batchUpdate API directly. It does have batch get item and batch write item APIs.
However, you can use the batchWriteItem API to update items.
1) Use the BatchWriteItemSpec class to construct the request.
2) Use the TableWriteItems class to construct the items that need to be updated.
3) Use the addItemToPut method (or withItemsToPut) to add the new attribute.
The batchWriteItem API creates a new item if no item with that partition key exists yet. If the partition key already exists, it overwrites the existing item (a put replaces the whole item rather than patching a single attribute). Your use case falls under this category.
Code sample:
files - the table name
fileName - the partition key
transcriptionnew - the new attribute being added (an existing attribute value could be updated the same way)
DynamoDB dynamoDB = new DynamoDB(dynamoDBClient);

Item itemUpdate1 = new Item();
itemUpdate1.withKeyComponent("fileName", "file1")
           .withString("transcriptionnew", "new value");

Item itemUpdate2 = new Item();
itemUpdate2.withKeyComponent("fileName", "file2")
           .withString("transcriptionnew", "new value");

TableWriteItems tableWriteItems = new TableWriteItems("files").withItemsToPut(itemUpdate1, itemUpdate2);

BatchWriteItemSpec batchWriteItemSpec = new BatchWriteItemSpec().withTableWriteItems(tableWriteItems);

BatchWriteItemOutcome batchWriteItemOutCome = dynamoDB.batchWriteItem(batchWriteItemSpec);

if (batchWriteItemOutCome.getUnprocessedItems().isEmpty()) {
    // all items were processed
    return true;
}
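Since the question starts from a CSV file, here is a rough sketch of how the rows could be fed into this API (the CSV path, the column layout, and the helper method name are assumptions; DynamoDB limits a single BatchWriteItem call to 25 items, hence the chunking). Note that, as mentioned above, each put replaces the whole item, so include every attribute you want to keep:

import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableWriteItems;
import com.amazonaws.services.dynamodbv2.document.spec.BatchWriteItemSpec;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class CsvBatchUpdate {

    // dynamoDB is an initialized document-API client, as in the sample above
    static void updateFromCsv(DynamoDB dynamoDB, String csvPath) throws Exception {
        List<Item> items = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get(csvPath))) {
            // assumption: each row looks like "fileName,transcriptionnew"
            String[] cols = line.split(",");
            items.add(new Item()
                .withKeyComponent("fileName", cols[0])
                .withString("transcriptionnew", cols[1]));
        }

        // BatchWriteItem accepts at most 25 items per request, so write in chunks
        for (int i = 0; i < items.size(); i += 25) {
            List<Item> chunk = items.subList(i, Math.min(i + 25, items.size()));
            TableWriteItems tableWriteItems = new TableWriteItems("files")
                .withItemsToPut(chunk.toArray(new Item[0]));
            dynamoDB.batchWriteItem(new BatchWriteItemSpec().withTableWriteItems(tableWriteItems));
        }
    }
}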
I'm working on File Exchange (Export) using the Data Import Export Framework (DIXF). I want to add a generate method to find the LineAmount of the PurchLine associated with the receiving line (VendPackingSlipTrans) from the PurchLine table. I created the following script, but I need some help:
[
    DMFTargetTransformationAttribute(true),
    DMFTargetTransformationDescAttribute("Function that generate LineAmount"),
    DMFTargetTransformationSequenceAttribute(11),
    DMFTargetTransFieldListAttribute([fieldStr(DMFVendPackingSlipTransEntity, LineAmount)])
]
public container GenerateLineAmount(boolean _stagingToTarget = true)
{
    container               res;
    PurchLine               purchLine;
    VendPackingSlipTrans    vendPackingSlipTrans;

    if (_stagingToTarget)
    {
        select firstOnly purchLine
            where purchLine.LineAmount == entity.LineAmount &&
                  vendPackingSlipTrans.OrigPurchid == purchLine.PurchId &&
                  vendPackingSlipTrans.PurchaseLineLineNumber == purchLine.LineNumber;

        if (! purchLine)
        {
            entity.LineAmount = purchLine.LineAmount;
            entity.insert();
        }
    }

    res = [entity.LineAmount];
    return res;
}
I have to export data from AX to a file using DMF. Some of the fields I need already exist in VendPackingSlipTrans, so I added those to the staging table, but other fields, like LineAmount, exist in other tables, and I don't know how to add them to the staging table. For that, I created a generate method in my entity class to map fields from the source table to the staging table.
So it seems you want to export VendPackingSlipTrans records with additional information from PurchLine records using a custom entity of the Data Import/Export Framework (DIXF). If that is correct, there are several problems in your implementation:
logic in if (_stagingToTarget) branch: since the framework can be used for both import and export, _stagingToTarget is used to distinguish between the two. If _stagingToTarget is true, data is imported from the staging table to the Dynamics AX target table. So you need to put the logic in the else branch.
selection of PurchLine record: the current implementation will never select a PurchLine record, because values of an uninstantiated VendPackingSlipTrans table variable are used as criteria in the select statement. The chosen criteria are also wrong; take a look at the purchLine method of table VendPackingSlipTrans to see how to get the PurchLine record for a VendPackingSlipTrans record, and use the target variable to instantiate the VendPackingSlipTrans table variable.
check if (! purchLine): this check means that if NO PurchLine record could be found with the previous select statement, the LineAmount of that empty record will be used for the staging record. This is wrong; instead, you want to use the LineAmount of a record that has been found.
entity.insert(): as I mentioned in the comments, the entity record should not be inserted in a generate method; the framework will take care of the insert
A possible fix of these problems could look like this:
[
    DMFTargetTransformationAttribute(true),
    DMFTargetTransformationDescAttribute('function that determines LineAmount for export'),
    DMFTargetTransformationSequenceAttribute(11),
    DMFTargetTransFieldListAttribute([fieldStr(DMFVendPackingSlipTransEntity, LineAmount)])
]
public container GenerateLineAmount(boolean _stagingToTarget = true)
{
    container               res;
    PurchLine               purchLine;
    VendPackingSlipTrans    vendPackingSlipTrans;

    if (_stagingToTarget)
    {
        // this will be executed during import
        res = [0.0];
    }
    else
    {
        // this will be executed during export
        // the target variable contains the VendPackingSlipTrans that is exported
        vendPackingSlipTrans = target;
        purchLine = vendPackingSlipTrans.purchLine();

        if (purchLine)
        {
            res = [purchLine.LineAmount];
        }
    }

    return res;
}
I am using ydn.db for local storage. Which function can be used to remove a store using the ydn.db library? The store I have added is as follows:
var schema = {
    stores: [{
        name: 'consignments',
        keyPath: "timeStamp"
    }]
};
var db = new ydn.db.Storage('localhost', schema);
I wish to check whether the store exists in local storage; if it exists, delete the store, and if it does not exist, add the store.
Whether a store exists or not can be known only after opening the database. If a schema does not list a store name but that store exists in the connected database, the existing store will be removed.
Given a storage instance created like this:
var db = new ydn.db.Storage('localhost', schema);
you can delete the above database with:
ydn.db.deleteDatabase(db.getName(), db.getType());
If the type of the database is not known:
ydn.db.deleteDatabase('localhost');
I use the deleteDatabase function like this:
indexedDB.deleteDatabase(storage_name);
See the docs here: https://dev.yathit.com/api/ydn/db/index.html
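To check whether the store already exists before deciding to delete and recreate the database, one option is to inspect the underlying database directly (a sketch assuming the IndexedDB backend is used and the database name is 'localhost' as above):

// open the raw IndexedDB database to inspect its object stores
var req = indexedDB.open('localhost');
req.onsuccess = function (event) {
  var rawDb = event.target.result;
  var hasStore = rawDb.objectStoreNames.contains('consignments');
  rawDb.close();
  if (hasStore) {
    // individual stores cannot be dropped without a version upgrade, so delete
    // the whole database; reconnecting with the schema afterwards
    // (new ydn.db.Storage('localhost', schema)) recreates the store
    ydn.db.deleteDatabase('localhost');
  }
};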