During development, structures and requirements change. Key and index settings may need to change, and that can break an incremental table update. So my solution so far is to delete the table and recreate it from the CloudFormation stack.
But how do I solve this problem for a production deployment? Is it possible to automate DynamoDB deployment as follows?
Create new table
Migrate data from old table to new table
Delete old table
Yes, it is perfectly possible to automate such a deployment structure. As long as you have code to create a table, it should be fairly straightforward to get all of the data from the old table, transform it, and then upload it all to a new table without any drop in up-time. If you mention which language you would like to do this in, I can help a bit more.
I've done this before, and I've added below a small, generified code sample showing how you could do this in Java.
Java method for creating a table given the class of the object type stored in dynamo:
/**
 * Creates a single table with its appropriate configuration (CreateTableRequest).
 */
public void createTable(Class<?> tableClass) {
    DynamoDBMapper mapper = createMapper(); // you'll need your own function to do this
    ProvisionedThroughput pt = new ProvisionedThroughput(1L, 1L);
    CreateTableRequest ctr = mapper.generateCreateTableRequest(tableClass);
    ctr.withProvisionedThroughput(pt);
    // Provision throughput and configure projection for secondary indexes.
    if (ctr.getGlobalSecondaryIndexes() != null) {
        for (GlobalSecondaryIndex idx : ctr.getGlobalSecondaryIndexes()) {
            if (idx != null) {
                idx.withProvisionedThroughput(pt).withProjection(new Projection().withProjectionType("ALL"));
            }
        }
    }
    TableUtils.createTableIfNotExists(client, ctr);
}
Java method to delete table:
private static void deleteTable(String tableName) {
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
    DynamoDB dynamoDB = new DynamoDB(client);
    Table table = dynamoDB.getTable(tableName);
    try {
        System.out.println("Issuing DeleteTable request for " + tableName);
        table.delete();
        System.out.println("Waiting for " + tableName + " to be deleted... this may take a while...");
        table.waitForDelete();
    } catch (Exception e) {
        System.err.println("DeleteTable request failed for " + tableName);
        System.err.println(e.getMessage());
    }
}
I would scan the whole table and collect all of its content into a List, map over that list converting each object into your new type, create a new table of that type under a different name, push all of the new objects, and then delete the old table after switching any references from the old table to the new one. Unfortunately, this does mean that everything consuming your tables needs to be able to switch between the two staging tables.
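For illustration, here is a minimal sketch of that scan-and-copy step, written in C# with the AWS SDK for .NET (the flow is the same with the Java mapper above). The table names and the Transform method are placeholders for your own schema conversion:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class TableMigration
{
    // Copies every item from "OldTable" into "NewTable", page by page.
    public static async Task MigrateAsync(IAmazonDynamoDB client)
    {
        Dictionary<string, AttributeValue> lastKey = null;
        do
        {
            // Scan one page of the old table; LastEvaluatedKey drives the pagination.
            var page = await client.ScanAsync(new ScanRequest
            {
                TableName = "OldTable",
                ExclusiveStartKey = lastKey
            });
            foreach (var item in page.Items)
            {
                await client.PutItemAsync(new PutItemRequest
                {
                    TableName = "NewTable",
                    Item = Transform(item)
                });
            }
            lastKey = (page.LastEvaluatedKey == null || page.LastEvaluatedKey.Count == 0)
                ? null
                : page.LastEvaluatedKey;
        } while (lastKey != null);
    }

    // Placeholder: convert an item from the old schema into the new one.
    private static Dictionary<string, AttributeValue> Transform(Dictionary<string, AttributeValue> oldItem)
    {
        return oldItem; // identity here; replace with your real conversion
    }
}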
I'm looking for a way to enable logging of changes for certain tables.
I have tried and tested adding tables to the database log programmatically, but with varying success so far: sometimes it works, sometimes it doesn't (mostly it does not). It seems that simply inserting rows into the DatabaseLog table doesn't quite do the trick.
What I have tried:
Adding a row with the proper tableId, fieldId, logType and domain. The domain had been assigned as 'Admin', the main company, an empty field, and subcompanies, all with the same result.
I have created a class that handles the inserts; its two main functions are:
public static void InsertBase(str tableName, domainId _domain = 'Admin')
{
    // Base logging for insert, update and delete on fieldId = 0.
    DatabaseLog     dbDict;
    TableId         _tableId;
    DatabaseLogType _logType;
    FieldId         _fieldId = 0;
    List            logTypes;
    ListEnumerator  enumerator;
    ;
    _tableId = tableName2id(tableName);
    logTypes = new List(Types::Enum);
    logTypes.addEnd(DatabaseLogType::Insert);
    logTypes.addEnd(DatabaseLogType::Update);
    logTypes.addEnd(DatabaseLogType::Delete);
    logTypes.addEnd(DatabaseLogType::EventInsert);
    logTypes.addEnd(DatabaseLogType::EventUpdate);
    logTypes.addEnd(DatabaseLogType::EventDelete);
    enumerator = logTypes.getEnumerator();
    while (enumerator.moveNext())
    {
        _logType = enumerator.current();
        select firstonly dbDict
            where dbDict.logTable == _tableId
               && dbDict.logField == _fieldId
               && dbDict.logType  == _logType;
        if (!dbDict) // no setup row exists yet for this table/field/log type
        {
            dbDict.logTable = _tableId;
            dbDict.logField = _fieldId;
            dbDict.logType  = _logType;
            dbDict.domainId = _domain;
            dbDict.insert();
        }
    }
    info("Success");
}
and the method that lists every single field and adds each one with DatabaseLogType::Update:
public static void init(str tableName, DomainId domain = 'Admin')
{
    DatabaseLogType logType;
    int             i;
    container       kk, ll;
    DatabaseLog     dbLog;
    TableId         _tableId;
    FieldId         _fieldId;
    ;
    logType  = DatabaseLogType::Update;
    _tableId = tableName2id(tableName);
    // Holds a container of table fields not yet added to the database log.
    kk = BLX_AddTableToDatabaseLog::buildFieldList(logType, tableName);
    for (i = 1; i <= conlen(kk); i++)
    {
        ll       = conpeek(kk, i);
        _fieldId = conpeek(ll, 1);
        info(strfmt("%1 %2", conpeek(ll, 1), conpeek(ll, 2)));
        dbLog.logType  = logType;
        dbLog.logTable = _tableId;
        dbLog.domainId = domain;
        dbLog.logField = _fieldId;
        dbLog.insert();
    }
}
Result: (screenshot omitted)
What am I missing?
EDIT with some additional info:
It does not work for SalesTable, SalesLine, or WMSBillOfLading.
I couldn't add a log for SalesTable and SalesLine using the wizard in the administration panel, but my colleague somehow did (she did exactly the same things as me). We also tried to add the log to various other tables and often found that she could while I could not, and vice versa (and sometimes neither of us managed it, as with the WMSBillOfLading table).
The inconsistency of this mechanism is what drove me to write this code, which I hoped would solve all the problems.
After making your setup changes, you probably have to call
SysFlushDatabaseLogSetup::main();
in order to flush any caches.
This method is also called in the standard AX code in the form method SysDatabaseLogTableSetup\Methods\close and in the class method SysDatabaseLogWizard\doRun.
When creating a user, entries are required in multiple tables. I am trying to create a transaction that creates a new entry in one table and then passes the new entity id to the parent table, and so on. The error I am getting is:
The transaction manager has disabled its support for remote/network
transactions. (Exception from HRESULT: 0x8004D024)
I believe this is caused by creating multiple connections within a single TransactionScope, but I am unsure of the best/most efficient way to do this.
[OperationBehavior(TransactionScopeRequired = true)]
public int CreateUser(CreateUserData createData)
{
    // Create a new family group and get the ID
    var familyGroupId = createData.FamilyGroupId ?? CreateFamilyGroup();
    // Create the APUser and get the Id
    var apUserId = CreateAPUser(createData.UserId, familyGroupId);
    // Create the institution user and get the Id
    var institutionUserId = CreateInsUser(apUserId, createData.AlternateId, createData.InstitutionId);
    // Create the investigator group user and return the Id
    return AddUserToGroup(createData.InvestigatorGroupId, institutionUserId);
}
This is an example of one of the function calls; all the others follow the same format:
public int CreateFamilyGroup()
{
    var familyRepo = _FamilyRepo ?? new FamilyGroupRepository();
    var familyGroup = new FamilyGroup() { CreationDate = DateTime.Now };
    return familyRepo.AddFamilyGroup(familyGroup);
}
And the repository call for this is as follows
public int AddFamilyGroup(FamilyGroup familyGroup)
{
    using (var context = new GameDbContext())
    {
        var newGroup = context.FamilyGroups.Add(familyGroup);
        context.SaveChanges();
        return newGroup.FamilyGroupId;
    }
}
I believe this is caused by creating multiple connections within a single TransactionScope
Yes, that is the problem. It does not really matter how you avoid it, as long as you avoid it. A common approach is to have one connection and one EF context per WCF request; you then need to find a way to pass that EF context along.
The method AddFamilyGroup illustrates a common anti-pattern with EF: you are using EF as a CRUD facility, when it is supposed to be more like a live object graph connected to the database. The entire WCF request should share the same EF context. If you move in that direction, the problem goes away.
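A minimal sketch of that direction, assuming you can hand the shared context to your repositories (the DbSet names and repository constructor here are hypothetical):

public int CreateUser(CreateUserData createData)
{
    // One context for the whole request: every insert rides on the same connection,
    // so the TransactionScope never has to escalate to a distributed transaction.
    using (var context = new GameDbContext())
    {
        var familyGroup = new FamilyGroup { CreationDate = DateTime.Now };
        context.FamilyGroups.Add(familyGroup);

        // ... add the APUser, institution user, and group user against the same context,
        // e.g. by passing it into each repository: new FamilyGroupRepository(context) ...

        context.SaveChanges(); // a single save at the end of the request
        return familyGroup.FamilyGroupId;
    }
}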
I am using the following code in X++ to get table names:
client server public static container tableNames()
{
    tableId    tableId;
    int        tablecounter;
    Dictionary dict = new Dictionary();
    container  tableNamesList;

    for (tablecounter = 1; tablecounter <= dict.tableCnt(); tablecounter++)
    {
        tableId = dict.tableCnt2Id(tablecounter);
        tableNamesList = conIns(tableNamesList, 1, dict.tableName(tableId));
    }
    return tableNamesList;
}
Business Connector code:
tablesList = (AxaptaContainer)Global.ax.CallStaticClassMethod("Code_Generator", "tableNames");
for (int i = 1; i <= tablesList.Count; i++)
{
    tableName = tablesList.get_Item(i).ToString();
    tables.Add(tableName);
}
The application hangs for 2 - 3 minutes while fetching data. What could be the cause? Any optimizations?
Rather than use conIns(), use +=; it will be faster:
tableNamesList += dict.tableName(tableId);
conIns() has to work out where in the container to place the insert; += just appends to the end.
As mentioned before, avoid conIns() when appending elements to a container, because it makes a new copy of the container. Use += instead to append in place.
Also, you may want to check for permissions and leave out temporary tables, table maps, and other special cases. Standard Ax has a method to build a table name lookup form that takes these things into account. Check the method Global::pickTable() for details.
You could also avoid some calls through the Business Connector by building the entire list in Ax in a similar way and returning it in a single function call.
If you are using Dynamics Ax 2012, you can skip the Dictionary/TreeNode traversal and use the SysModelElement table to fetch the data, returning it directly as a .NET ArrayList to ease things on the other side:
public static System.Collections.ArrayList FetchTableNames_ModelElementTables()
{
    SysModelElement              element;
    SysModelElementType          elementType;
    System.Collections.ArrayList tableNames = new System.Collections.ArrayList();
    ;
    // The SysModelElementType table contains the element types,
    // and we need the RecId for the next selection.
    select firstonly RecId from elementType
        where elementType.Name == 'Table';

    // With the RecId of the table element type, select all of the
    // elements with that type (hence, select all of the tables).
    while select Name from element
        where element.ElementType == elementType.RecId
    {
        tableNames.Add(element.Name);
    }
    return tableNames;
}
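On the .NET side, the whole list then comes back in a single Business Connector call; a sketch, assuming the method above lives on the same "Code_Generator" class and Global.ax is the session used earlier:

var tablesList = (System.Collections.ArrayList)Global.ax.CallStaticClassMethod(
    "Code_Generator", "FetchTableNames_ModelElementTables");

// One round-trip instead of one get_Item call per table name.
var tables = new List<string>();
foreach (object name in tablesList)
{
    tables.Add((string)name);
}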
Alright, I have tried a lot of things and in the end decided to create a table consisting of all table names, populated by a job. I am fetching records from this table.
I have two tables in my database, viz. PurchaseOrderMST and SiteTRS. The primary key of PurchaseOrderMST is a foreign key in SiteTRS. Data is first inserted into PurchaseOrderMST and then into SiteTRS, using two individual stored procedures. I want to maintain a transaction across the insertions into these two tables from ASP.NET (C#). Data should be inserted into the second table only if the insert into the first table succeeds; if the insert into the second table fails, the insert into the first table should roll back as well.
How can I do this using the transaction mechanism from ASP.NET?
This is not really related to ASP.NET, but to how transactions generally work in the .NET Framework. With SqlClient, this is how you do it:
using (var connection = new SqlConnection("your connectionstring"))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        try
        {
            using (var command1 = new SqlCommand("SP1Name", connection, transaction))
            {
                command1.ExecuteNonQuery();
            }
            using (var command2 = new SqlCommand("SP2Name", connection, transaction))
            {
                command2.ExecuteNonQuery();
            }
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}
You will of course need to add parameters to the SqlCommand objects before you execute them, and which execute method you use (ExecuteNonQuery(), ExecuteScalar(), or ExecuteReader()) depends on whether your stored procedures actually return any data.
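For example, the parameter setup for the first command might look like the sketch below; the parameter names and types are hypothetical, and setting CommandType.StoredProcedure tells SqlClient to treat the command text as a procedure name rather than a SQL batch:

using (var command1 = new SqlCommand("SP1Name", connection, transaction))
{
    command1.CommandType = CommandType.StoredProcedure; // requires System.Data

    // Hypothetical parameters; use your procedure's real names and types.
    command1.Parameters.Add("@PurchaseOrderId", SqlDbType.Int).Value = purchaseOrderId;
    command1.Parameters.Add("@OrderDate", SqlDbType.DateTime).Value = DateTime.Now;

    command1.ExecuteNonQuery();
}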
I am using ASP.NET MVC2 in Visual Studio 2008. I believe the SQL Server is 2005.
I have two tables: EquipmentInventory and EquipmentRequested.
EquipmentInventory has a primary key of sCode.
EquipmentRequested has a foreign key called sCode, based upon sCode in EquipmentInventory.
I am trying the following code (lots of non-relevant code removed):
try
{
    EChODatabaseConnection myDB = new EChODatabaseConnection();

    // This section of code works fine. The data shows up in the database as expected.
    foreach (var equip in oldData.RequestList)
    {
        if (equip.iCount > 0)
        {
            dbEquipmentInventory dumbEquip = new dbEquipmentInventory();
            dumbEquip.sCode = equip.sCodePrefix + newRequest.iRequestID + oldData.sRequestor;
            myDB.AddTodbEquipmentInventorySet(dumbEquip);
        }
    }
    myDB.SaveChanges(); // save this out immediately so we can add in new requests

    // This code runs fine...
    foreach (var equip in oldData.RequestList)
    {
        if (equip.iCount > 0)
        {
            dbEquipmentRequested reqEquip = new dbEquipmentRequested();
            reqEquip.sCode = equip.sCodePrefix + newRequest.iRequestID + oldData.sRequestor;
            myDB.AddTodbEquipmentRequestedSet(reqEquip);
        }
    }
    // ...but when I try to save the above result, I get an error.
    myDB.SaveChanges();
oldData is passed into the function. newRequest is the result of adding to a "non-related" table. newRequest.iRequestID does have a value.
Looking at reqEquip in the watch window, I notice that EquipmentInventory is null.
The error message I receive is:
"Entities in 'EChODatabaseConnection.dbEquipmentRequestedSet' participate in the 'FK_EquipmentRequested_EquipmentInventory_sCode' relationship. 0 related 'EquipmentInventory' were found. 1 'EquipmentInventory' is expected."
Obviously I'm doing something wrong, but thus far I cannot seem to find where the problem is.
Anyone have some hints on how to properly insert a record into a table that has a foreign key reference?
UPDATE:
I am using the Data Entity Framework.
UPDATE:
Thanks to Rob's answer, I was able to figure out my error.
As Rob mentioned, I needed to set my reference for the foreign key.
My resulting code looks like this:
foreach (var equip in oldData.RequestList)
{
    if (equip.iCount > 0)
    {
        dbEquipmentInventory dumbEquip = new dbEquipmentInventory();
        dumbEquip.sCode = equip.sCodePrefix + newRequest.iRequestID + oldData.sRequestor;
        myDB.AddTodbEquipmentInventorySet(dumbEquip);

        // Add in our actual request items.
        dbEquipmentRequested reqEquip = new dbEquipmentRequested();
        reqEquip.EquipmentInventory = dumbEquip;
        myDB.AddTodbEquipmentRequestedSet(reqEquip);
    }
}
myDB.SaveChanges();
Does anyone see a better method for doing this?
What are you using as an ORM? Regardless of which one it is, you could use its foreign-key handling to take care of this for you. For example, make a new dumbEquip but don't save it immediately. Then do dbEquipmentRequested reqEquip = new dbEquipmentRequested(); add the data to it, and call dumbEquip.dbEquipmentRequested.Add(reqEquip). Then save the record, and the ORM should save both records in the order required by the FK, and even fill the FK ID into the reqEquip record.
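A sketch of that collection-based variant, assuming Entity Framework generated a dbEquipmentRequested collection on the dbEquipmentInventory entity (the usual shape for a one-to-many):

foreach (var equip in oldData.RequestList)
{
    if (equip.iCount > 0)
    {
        var dumbEquip = new dbEquipmentInventory();
        dumbEquip.sCode = equip.sCodePrefix + newRequest.iRequestID + oldData.sRequestor;

        // Attach the child through the parent's collection instead of setting keys by hand;
        // EF then orders the inserts and fills in the foreign key itself.
        dumbEquip.dbEquipmentRequested.Add(new dbEquipmentRequested());

        myDB.AddTodbEquipmentInventorySet(dumbEquip); // the child goes along with the parent
    }
}
myDB.SaveChanges(); // one save; EF resolves the FK ordering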