Azure CosmosDB Java Delete All items in a Container

I'm trying to delete all items in a CosmosDB container (I cannot delete the container itself), and I cannot find an efficient way to do this.
I have a related problem where I am trying to select all/read all so I can get their ids in order to then call a delete on them all... bearing in mind this is probably incredibly inefficient. Modifying the published example very slightly, I am trying to select all documents as below:
private void queryDocuments() throws Exception {
    logger.info("Query documents in the container " + containerName + ".");
    String sql = "SELECT * FROM c";
    CosmosPagedIterable<Family> filteredFamilies =
            container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
    while (filteredFamilies.iterator().hasNext()) {
        Family family = filteredFamilies.iterator().next();
        logger.info("First query result: Family with (/id, partition key) = (%s,%s)",
                family.getId(), family.getLastName());
    }
    logger.info("Done.");
}
The code above only retrieves one document, and the while loop iterates forever, printing the same document. I have two questions:
How can I fix the above query so it behaves like a typical SELECT ALL?
Is there a better way to delete all items in a container via the Java SDK without deleting the container?

You can try setting the TTL (time to live) for your container to 10 seconds or so. If no record is created or updated in the meantime, this will automatically delete all items.
https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/time-to-live
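As for why the query only ever shows the first document: each call to filteredFamilies.iterator() creates a brand-new iterator positioned at the start, so hasNext() is always true and next() always returns the first result. Iterating the CosmosPagedIterable once, e.g. with an enhanced for loop, fixes that, and the same loop can then drive the per-item deletes. A minimal sketch, assuming the v4 Java SDK (azure-cosmos) that the sample above uses and the same Family POCO whose last name is the partition key:

private void deleteAllDocuments() {
    // PartitionKey and CosmosItemRequestOptions live in com.azure.cosmos.models.
    String sql = "SELECT * FROM c";
    CosmosPagedIterable<Family> allFamilies =
            container.queryItems(sql, new CosmosQueryRequestOptions(), Family.class);
    // Iterate the pageable once instead of creating a fresh iterator on every pass.
    for (Family family : allFamilies) {
        container.deleteItem(family.getId(),
                new PartitionKey(family.getLastName()),
                new CosmosItemRequestOptions());
    }
}

And if the TTL route is acceptable, it can be switched on from the same SDK; a sketch, again assuming azure-cosmos v4:

// Read the current container properties, set a short default TTL, and write them back.
CosmosContainerProperties properties = container.read().getProperties();
properties.setDefaultTimeToLiveInSeconds(10); // items expire ~10s after their last write
container.replace(properties);

Bear in mind that TTL cleanup runs in the background using spare request units, so it is not instantaneous; the per-item delete loop is the more predictable option when you need the container empty right away.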

Related

How to migrate dynamodb data on major table change?

During development, structures and requirements change. Key and index settings sometimes need to change in ways that break an incremental table update. So my solution so far has been to delete the table and recreate it from the CloudFormation stack.
But how do you solve this problem for a production deployment? Is it possible to automate a DynamoDB deployment as follows?
Create new table
Migrate data from old table to new table
Delete old table
Yes, it is perfectly possible to automate such a deployment. As long as you have code to create a table, it is fairly straightforward to read all of the data from the old table, transform it, and then upload it all to a new table without any drop in up-time. If you say which language you would like to do this in, I can help a bit more.
I've done this before, and below I've added a small, generified code sample showing how you could do this in Java.
Java method for creating a table given the class of the object type stored in dynamo:
/**
 * Creates a single table with its appropriate configuration (CreateTableRequest).
 */
public void createTable(Class<?> tableClass) {
    DynamoDBMapper mapper = createMapper(); // you'll need your own function to do this.
    ProvisionedThroughput pt = new ProvisionedThroughput(1L, 1L);
    CreateTableRequest ctr = mapper.generateCreateTableRequest(tableClass);
    ctr.withProvisionedThroughput(pt);
    // Provision throughput and configure projection for secondary indices.
    if (ctr.getGlobalSecondaryIndexes() != null) {
        for (GlobalSecondaryIndex idx : ctr.getGlobalSecondaryIndexes()) {
            if (idx != null) {
                idx.withProvisionedThroughput(pt).withProjection(new Projection().withProjectionType("ALL"));
            }
        }
    }
    TableUtils.createTableIfNotExists(client, ctr);
}
Java method to delete a table:
private static void deleteTable(String tableName) {
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    DynamoDB dynamoDB = new DynamoDB(client);
    Table table = dynamoDB.getTable(tableName);
    try {
        System.out.println("Issuing DeleteTable request for " + tableName);
        table.delete();
        System.out.println("Waiting for " + tableName + " to be deleted...this may take a while...");
        table.waitForDelete();
    } catch (Exception e) {
        System.err.println("DeleteTable request failed for " + tableName);
        System.err.println(e.getMessage());
    }
}
I would scan the whole table, load all of the content into a List, map over that list converting each object into your new type, create a new table of that type under a different name, push all of the converted objects, and then delete the old table after switching any references from the old table to the new one. Unfortunately this does mean that everything consuming your tables needs to be able to switch between the two staging tables.
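A minimal sketch of that scan-convert-push step, assuming the same AWS SDK v1 DynamoDBMapper used above; OldItem, NewItem, and the convert() helper are hypothetical placeholders for your own mapper classes and field mapping:

/**
 * Copies every item from the old table into the new one, converting along the way.
 * OldItem and NewItem are hypothetical @DynamoDBTable-annotated classes.
 */
public void migrate(DynamoDBMapper mapper) {
    // scan() returns a lazily-paginated list over the entire old table.
    List<OldItem> oldItems = mapper.scan(OldItem.class, new DynamoDBScanExpression());
    List<NewItem> newItems = new ArrayList<>();
    for (OldItem item : oldItems) {
        newItems.add(convert(item)); // your own old-schema -> new-schema mapping
    }
    // batchSave() writes in chunks of 25 items behind the scenes.
    List<DynamoDBMapper.FailedBatch> failures = mapper.batchSave(newItems);
    if (!failures.isEmpty()) {
        throw new IllegalStateException(failures.size() + " batch write(s) failed");
    }
}

Combined with the createTable and deleteTable helpers above, that gives you the whole create-migrate-delete pipeline. One caveat: a plain scan reads the table as it is at that moment, so if writers are still active you will need to pause them or replay the delta before switching references over.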

qt multiple QSqlTableModels edited together in one transaction

I have a window in a Qt application using a PostgreSQL 9.3 database. The window is a form used to display, edit and insert new data.
I have data from 3 SQL tables in that view. The tables are related with foreign keys:
contractors (main table) - mapped to "personal data" section
contacts (has foreign key to contractors.ID)
addresses (has foreign key to contractors.ID)
So, in my window's class I have 3 main models (plus 2 proxy models to transpose the tables in the "personal data" and "address data" sections). I use QSqlTableModel for these sections, and a QSqlRelationalTableModel for the contact data section. When opening that window "normally" (to view some contractor), I simply pass the contractor's ID to the constructor and store it in the proper variable. I also call the QSqlTableModel::setFilter(const QString &filter) method on each of the models and set the proper filtering. When opening the window in "add new" mode, I simply pass "-1" or "0" as the ID, so no data gets loaded into the models.
All 3 models use the QSqlTableModel::OnManualSubmit edit strategy. When saving the data (triggered by clicking the proper button), I start a transaction and then submit the models one by one. The personalData model gets submitted first, as I need to obtain its PK after the insert (to set in the FK fields of the other models).
When submitting a model fails, I show a message box with the QSqlError content, roll back the transaction and return from the method.
When I get an error on the first model being processed there is no problem, as nothing was inserted. But when the first model is saved and the second or third fails, there is a little problem. I roll back the transaction as before and return from the function. But after correcting the data and submitting again, the first model makes no attempt to submit: it does not know there was a rollback and that its data needs to be inserted again. What would be a good way to notify such a model that it needs to be submitted once more?
At the moment I have ended up with something like this:
void kontrahenciSubWin::on_btnContractorAdd_clicked() {
    //QStringList errorList; // when an error occurs in one model the whole transaction gets broken, so no need for a list
    QString error;
    QSqlDatabase db = QSqlDatabase::database();
    // Back up the data in case something fails and we have to roll back the transaction.
    QSqlRecord personalDataModelrec = personalDataModel->record(0); // always one row; will get erased by submitAll, as no filter is set because I don't have its ID yet
    QList<QSqlRecord> contactDataModelRecList;
    for (int i = 0; i < contactDataModel->rowCount(); i++) {
        contactDataModelRecList.append(contactDataModel->record(i));
    }
    QList<QSqlRecord> addressDataModelRecList;
    for (int i = 0; i < addressDataModel->rowCount(); i++) {
        addressDataModelRecList.append(addressDataModel->record(i));
    }
    db.transaction();
    if (personalDataModel->isDirty() && error.isEmpty()) {
        if (!personalDataModel->submitAll()) // submitAll calls select() on the model, which destroys the data, as the filter is invalid ("where ID = -1")
            //errorList.append( personalDataModel->lastError().databaseText() );
            error = personalDataModel->lastError().databaseText();
        else {
            kontrahentid = personalDataModel->query().lastInsertId().toInt(); // only here can I fetch the ID
            setFilter(ALL); // and pass it to the models
        }
    }
    if (contactDataModel->isDirty() && error.isEmpty())
        if (!contactDataModel->submitAll()) // slot on_contactDataModel_beforeInsert() sets the FK field
            //errorList.append( contactDataModel->lastError().databaseText() );
            error = contactDataModel->lastError().databaseText();
    if (addressDataModel->isDirty() && error.isEmpty())
        if (!addressDataModel->submitAll()) // slot on_addressDataModel_beforeInsert() sets the FK field
            //errorList.append( addressDataModel->lastError().databaseText() );
            error = addressDataModel->lastError().databaseText();
    //if (!errorList.isEmpty()) {
    //    QMessageBox::critical(this, tr("Data was not saved!"), tr("The following errors occurred:") + " \n" + errorList.join("\n"));
    if (!error.isEmpty()) {
        QMessageBox::critical(this, tr("Data was not saved!"), tr("The following errors occurred:") + " \n" + error);
        db.rollback();
        personalDataModel->clear();
        contactDataModel->clear();
        addressDataModel->clear();
        initModel(ALL); // re-init models: set table and so on
        // Re-add the data to the models - the backup comes in handy here.
        personalDataModel->insertRecord(-1, personalDataModelrec);
        for (QList<QSqlRecord>::iterator it = contactDataModelRecList.begin(); it != contactDataModelRecList.end(); it++) {
            contactDataModel->insertRecord(-1, *it);
        }
        for (QList<QSqlRecord>::iterator it = addressDataModelRecList.begin(); it != addressDataModelRecList.end(); it++) {
            addressDataModel->insertRecord(-1, *it);
        }
        return;
    }
    db.commit();
    isInEditMode = false;
    handleGUIOnEditModeChange();
}
Does anyone have a better idea? I doubt it's possible to omit backing up the records before trying to insert them, but maybe there is a better way to re-add them to the model? I tried setRecord, and a removeRows & insertRecord combo, but with no luck. Resetting the whole model seems easiest (I only need to re-init it, as it loses its table, filter, sorting and everything else when cleared).
I suggest you use a function written in the PL/pgSQL language. It runs as one transaction between BEGIN and END; if anything goes wrong at any point in the code, it will roll back all the data flawlessly.
What you are doing now is not a good design, because you are handing control over a certain functionality (the rollback) to a system external to where the rollback actually happens (the database). The external system is not designed for that, while the database, on the contrary, was created and designed to deal with rollbacks and transactions; it is very good at it. Rebuilding and reinventing this quite complex functionality outside the database is asking for a lot of trouble. You will never get the same flawless rollback handling as you will have using functions within the database.
Let each system do what it does best.
I have met your problem before and had the same line of thought to work it out, using Hibernate in my case, until I stepped back from my efforts and re-evaluated the situation.
There are three teams working on the rollback mechanism of a database:
1. the men and women who are writing the source code of the database itself,
2. the men and women who are writing the Hibernate code, and
3. me.
The first team is dedicated to the creation of a good rollback mechanism; if they had failed, they would have a bad product, and they succeeded. The second team is also dedicated to creating a good rollback mechanism; their product fails only in very complex situations.
The last team, me, is not dedicated to this problem. Who am I to write a better solution than the people of team 2, or of team 1, building on the work of team 2, who themselves were not able to get it to the level of team 1?
That is when I decided to use database functions instead.

SQL Lite Xamarin : Query

I'm a newbie with SQLite.
I would like to query my SQLite database to get multiple rows.
When I add a new item to my local database I call this Add method:
public bool Add<T>(string key, T value)
{
    return this.Insert(new SQliteCacheTable(key, this.GetBytes(value))) == 1;
}

_simpleCache.Add("favorite_1", data1);
_simpleCache.Add("favorite_2", data2);
_simpleCache.Add("favorite_3", data3);
Then I would like to retrieve from the local database all entries whose key starts with "favorite_", i.e. return all objects in the database which are "favorite" objects.
I'm experienced in Linq, and I would like to do something like this:
IEnumerable<Element> favorites = repository.Find(element => element.Key.StartsWith("favorite_"));
In the SQLiteConnection class there is a method like this:
SQLite.Net.SQLiteConnection.Find<T>(System.Linq.Expressions.Expression<System.Func<T,bool>>)
But I would like the same thing with a return type of IEnumerable<T>.
Can you help me please?
Thank you.
Jool
You have to build your query on the table itself, not on the connection.
Assuming:
SQLiteConnection repository;
the code would look like this (note that the filter has to test the record's key property, e.g. item.Key, rather than the record itself):
var favorites = repository.Table<SQliteCacheTable>().Where(item => item.Key.StartsWith("favorite_"));
The favorites variable is of type TableQuery<SQliteCacheTable> though, so it does not yet contain your data. The execution of the actual SQL query is deferred until you access the results (by enumerating with foreach or converting to a list with ToList, for example).
To observe what is actually going on in the database, you can turn on tracing in sqlite-net by setting repository.Trace = true on your SQLiteConnection object.
Finally, it's worth mentioning that you can also use the C# query syntax on TableQuery<T> objects, if you're comfortable with it. So your query could become:
var favorites = from item in repository.Table<SQliteCacheTable>()
                where item.Key.StartsWith("favorite_")
                select item;

Query to fetch table names from AX takes too long

I am using the following code in X++ to get table names:
client server public static container tableNames()
{
    tableId tableId;
    int tablecounter;
    Dictionary dict = new Dictionary();
    container tableNamesList;

    for (tablecounter = 1; tablecounter <= dict.tableCnt(); tablecounter++)
    {
        tableId = dict.tableCnt2Id(tablecounter);
        tableNamesList = conIns(tableNamesList, 1, dict.tableName(tableId));
    }
    return tableNamesList;
}
Business Connector code:
tablesList = (AxaptaContainer)Global.ax.CallStaticClassMethod("Code_Generator", "tableNames");
for (int i = 1; i <= tablesList.Count; i++)
{
    tableName = tablesList.get_Item(i).ToString();
    tables.Add(tableName);
}
The application hangs for 2-3 minutes while fetching the data. What could be the cause? Are there any optimizations?
Rather than using conIns(), use +=; it will be faster:
tableNamesList += dict.tableName(tableId);
conIns() has to work out where in the container to place the insert; += just adds the element to the end.
As mentioned before, avoid conIns() when appending elements to a container, because it makes a new copy of the container. Use += instead to append in place.
Also, you may want to check for permissions and leave out temporary tables, table maps, and other special cases. Standard Ax has a method to build a table name lookup form that takes these things into account. Check the method Global::pickTable() for details.
You could avoid some calls through the business connector as well and build the entire list in Ax in a similar way and return that in a single function call.
If you are using Dynamics Ax 2012, you can skip the TreeNode stuff and use the SysModelElement table to fetch the data, returning it directly as a .Net ArrayList to ease things up on the other side.
public static System.Collections.ArrayList FetchTableNames_ModelElementTables()
{
    SysModelElement element;
    SysModelElementType elementType;
    System.Collections.ArrayList tableNames = new System.Collections.ArrayList();
    ;
    // The SysModelElementType table contains the element types,
    // and we need the RecId for the next selection.
    select firstonly RecId
        from elementType
        where elementType.Name == 'Table';

    // With the RecId of the table element type,
    // select all of the elements with that type (hence, all of the tables).
    while select Name
        from element
        where element.ElementType == elementType.RecId
    {
        tableNames.Add(element.Name);
    }

    return tableNames;
}
Alright, I tried a lot of things, and in the end I decided to create a table holding all of the table names, populated by a job. I now fetch the records from that table instead.

Entity insert appears to succeed, but doesn't show up in queries

I have a very simple row that I'm inserting using Entity Framework, which I do like so:
var context = GetEntityContext();
SOMEPOCO newobj = new SOMEPOCO
{
    Data = data
};
context.SOMEOBJECTS.Add(newobj);
context.SaveChanges();
return newobj.ID;
And newobj.ID (the auto-incremented primary key) is indeed populated; no errors are raised or exceptions thrown. But when I go to SQL Management Studio and query for the item, or look it up in code, it doesn't show up. Yet if I manually make an entry in the DB, the primary key increments as though the previous failed entry were there.
What could be causing this?
Thanks.
