I encountered something really puzzling in my development work.
removePreviousFoodMenuItems(oldRefList);
shFood.setNewFoodMenuItems(newRefList);
em.merge(shFood);
em.flush(); //Error occurs
If I call removePreviousFoodMenuItems before merge, I will get a "Cannot merge an entity that has already been removed" exception at runtime. However, this should not occur because I have set shFood to reference a new set of food menu items (newRefList). So why is merge still trying to merge the oldRefList elements that have already been removed? This problem does not occur if I put removePreviousFoodMenuItems after the flush statement.
shFood.setNewFoodMenuItems(newRefList);
em.merge(shFood);
em.flush(); //Error does not occur
removePreviousFoodMenuItems(oldRefList);
Below is the code for removePreviousFoodMenuItems:
public void removePreviousFoodMenuItems(ArrayList<FoodMenuItem> oldRefList){
for (FoodMenuItem foodMenuItem : oldRefList) {
foodMenuItem.setStakeholderFood(null);
foodMenuItem.setPhotoEntity(null);
em.remove(foodMenuItem);
//em.flush();
}//end for
}//end removePreviousFoodMenuItems
Would really appreciate some advice on this!
UPDATE: How the newRefList is created:
StakeholderFood stakeholder = em.find(StakeholderFood.class, stakeholderID);
ArrayList<FoodMenuItem> newRefList = new ArrayList<FoodMenuItem>();
for (Object o : menuItem) {
FoodMenuItem fmi = (FoodMenuItem) o;
FoodMenuItem newFmi = new FoodMenuItem();
String previousName = fmi.getItemName();
newFmi.setItemName(previousName);
newFmi.setItemPrice(fmi.getItemPrice());
newFmi.setPhotoEntity(fmi.getPhotoEntity());
//Upload the photos for each item attached to menuItem
Photo photo = fmi.getPhotoEntity();
if(photo!=null){
photo.setFoodmenuItem(newFmi); //set new relationship, break off with old
em.merge(photo); //This will merge newFmi as well Fix this tomorrow
em.flush(); //update the links immediately
}
if (photo != null && fmi.getContainsImage() == Boolean.FALSE) {
uploadFoodMenuItemImages(photo);
newFmi.setPhotoEntity(photo);
newFmi.setContainsImage(Boolean.TRUE);
newFmi.setRenderedImage(Boolean.FALSE);
newFmi.setRenderedImageAltText(Boolean.FALSE);
}//end photo
else {
newFmi.setRenderedImageAltText(Boolean.TRUE);
}
newFmi.setStakeholderFood(stakeholder);
newRefList.add(newFmi);
}//end for
You have one or more of the same FoodMenuItem instances in both oldRefList and newRefList. Applying remove to all items in oldRefList therefore causes some of the entities referenced by newRefList to become removed.
The consequence is that shFood holds a list in which at least one FoodMenuItem is in the removed state. If you perform the flush before the removal, there is no such problem at the moment the flush takes place, because shFood no longer references any removed instances.
I'm looking for a way to enable logging of changes for certain tables.
I have tried and tested adding tables to the database log programmatically, but with varying success so far - sometimes it works, sometimes it doesn't (mostly it does not). It seems that simply inserting rows into the DatabaseLog table doesn't quite do the trick.
What I have tried:
Adding a row with the proper tableId, fieldId, logType and domain. The domain had been assigned as 'Admin', the main company, an empty value and subcompanies, all with the same result.
I have created a class that handles the inserts; its two main methods are:
public static void InsertBase(str tableName, domainId _domain = 'Admin')
{
//base logging for insert, delete, update on fieldId = 0
DatabaseLog DBDict;
TableId _tableId;
DatabaseLogType _logType;
fieldId _fieldId =0;
List logTypes;
int i;
ListEnumerator enumerator;
;
_tableId= tableName2id(tableName);
logTypes = new List(Types::Enum);
logTypes.addEnd(DatabaseLogType::Insert);
logTypes.addEnd(DatabaseLogType::Update);
logTypes.addEnd(DatabaseLogType::Delete);
logTypes.addEnd(DatabaseLogType::EventInsert);
logTypes.addEnd(DatabaseLogType::EventUpdate);
logTypes.addEnd(DatabaseLogType::EventDelete);
enumerator = logTypes.getEnumerator();
while(enumerator.moveNext())
{
_logType = enumerator.current();
select * from dbdict where
dbdict.logTable==_tableId && dbdict.logField==_fieldId
&& dbdict.logType==_logType;
if(!dbDict) //that means it doesn't exist
{
dbdict.logTable=_tableId;
dbdict.logField=_fieldId;
dbdict.logType=_logType;
dbdict.domainId=_domain;
dbdict.insert();
}
}
info("Success");
}
and the method that lists every field of the table and adds each of them with DatabaseLogType::Update:
public static void init(str TableName, DomainId domain='Admin')
{
DatabaseLogType logtype;
int i;
container kk, ll;
DatabaseLog dblog;
tableid _tableId;
fieldid _fieldid;
;
logtype = DatabaseLogType::Update;
//holds a container of not yet added table fields to databaselog
kk = BLX_AddTableToDatabaseLog::buildFieldList(logtype,TableName);
for(i=1; i <= conlen(kk);i++)
{
ll = conpeek(kk,i);
_tableid = tableName2id(tableName);
_fieldid = conpeek(ll,1);
info(strfmt("%1 %2", conpeek(ll,1),conpeek(ll,2)));
dblog.logType=logType;
dblog.logTable = _tableId;
dblog.domainId = domain;
dblog.logField =_fieldid;
dblog.insert();
}
}
Result:
What am I missing?
EDIT: some additional info
It does not work for SalesTable, SalesLine or WMSBillOfLading.
I couldn't add a log for SalesTable and SalesLine using the wizard in the administration panel, but my colleague somehow did (she did exactly the same things as me). We also tried to add logging to various other tables, and we often found that she could while I could not, and vice versa (and sometimes neither of us managed, as in the case of the WMSBillOfLading table).
The inconsistency of this mechanism is what drove me to write this code, which I hoped would solve all the problems.
After doing your setup changes you probably have to call
SysFlushDatabaseLogSetup::main();
in order to flush any caches.
This method is also called in the standard AX code in the form method SysDatabaseLogTableSetup\Methods\close and in the class method SysDatabaseLogWizard\doRun.
Current project:
ASP.NET 4.5.2
MVC 5
I have the following update code for an entry:
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> EditClient(ModifyClientViewModel model) {
Clients clients = await db.Clients.FindAsync(new Guid(model.ClientId));
if(clients == null) { return HttpNotFound(); }
try {
if(ModelState.IsValid) {
TextInfo ti = CultureInfo.CurrentCulture.TextInfo;
clients.ClientName = ti.ToTitleCase(model.ClientName.Trim());
clients.CityId = new Guid(model.CityId);
clients.SortOrder = MakeRoom(model.SortOrder, model.ClientId);
clients.Active = model.Active;
clients.Modified = DateTime.UtcNow;
clients.TouchedBy = User.GetClaimValue("UserGuid");
db.Entry(clients).State = EntityState.Modified;
await db.SaveChangesAsync();
return RedirectToAction("Index");
}
} catch(DbUpdateConcurrencyException ex) {
// removed
}
return View(model);
}
The important line is line 11, the call to MakeRoom(), where I need to be able to shift the sort order of the entry in question, assuming it has changed.
Now, a word about the sort order: It is a short (smallint) column in SQL Server 2012, because there will probably never be more than a few hundred entries anyhow. These are sequential numbers from 1 on up. There will be no gaps in the number sequence (deletions will pull everyone else down), so pulling the max value from that column will also describe the number of rows there are. While two rows can have the same SortOrder (the column allows duplicates for the sorting code to function), this is not supposed to be a persistent state -- it will exist only as long as the actual sort code is running. Once the sorting is done, no duplicate should exist.
If anyone remembers their first-year programming classes, this recursion should be analogous to a “bubble sort” where an element bubbles up to where it is supposed to be. Except here we are “bubbling” by actually changing the row’s SortOrder value.
I have some code that is supposed to be a recursive algorithm that takes the desired position, and the ID of the current position, and brings the current position close to the desired position by swapping with the next closer item before looping back onto itself. Once current = desired, the recursion is supposed to end, punting the desired value back to the code above.
Here is what I have made:
public short MakeRoom(short newSort, string id) {
var currentClient = db.Clients.Find(new Guid(id));
short currentSort = currentClient.SortOrder;
if(currentSort != newSort) { // need to shift the current value
short shiftSort = (currentSort < newSort ? currentSort++ : currentSort--); //are we shifting up or down?
var shiftClient = db.Clients.Where(x => x.SortOrder == shiftSort).First();
Clients clients = db.Clients.Find(shiftClient.ClientId);
clients.SortOrder = currentSort; // give the next row the current client's sort number
db.Entry(clients).State = EntityState.Modified;
db.SaveChanges();
currentClient.SortOrder = shiftSort; // give the current row the next row's sort number
db.Entry(currentClient).State = EntityState.Modified;
db.SaveChanges();
MakeRoom(newSort, id); //One swap done, proceed to the next swap.
}
return newSort;
}
As you can see, my “base condition” is whether current = desired, and if it does match, all the code is supposed to be ignored in favour of the return statement. If it doesn’t match, the code executes one shift, and then calls itself to conduct the next shift (re-evaluating the current sort value via the ID of the current client because the current sort number is now different due to the just-executed prior shift). Once all shifts are done and current = desired, the code exits with the return statement.
I was hoping someone could examine my logic and see if it is where my problem lies. I seem to be having an infinite loop that doesn’t actually touch the DB, because none of the values in the DB actually get altered -- IIS just ends up crashing.
Edit 1: Found the stack overflow. Turns out the problem is with
var shiftClient = db.Clients.Where(x => x.SortOrder == shiftSort).First();
Problem is, I am not sure why. In the prior line, I had set shiftSort to be one off (depending on the direction) from the current sort. I then want to grab the ClientID via this shiftSort value (which is the SortOrder). Since there should be only one such SortOrder value in the column, I should be able to do a search for it using the line above. But apparently it throws a stack overflow.
So, to be specific: Let's say I went after a client with a SortOrder of 53. I want him to end up with a SortOrder of 50. The prior line takes that 50 (the newSort) and discovers that it is less than the currentSort, so it assigns the shiftSort a value of currentSort-- (53 - 1 = 52). The line above is supposed to take that value of 52, and return a row where that 52 exists, so that on the following rows the ClientID of 52 can be used to modify that line to be 53 (the swap).
Suggestions? I am not understanding why I am experiencing a stack overflow here.
Edit 2: Revamped my MakeRoom method, but I am still experiencing a stack overflow on the affected line:
public short MakeRoom(short newSort, string id) {
Clients currentClient = db.Clients.Find(new Guid(id));
short currentSort = currentClient.SortOrder;
if(currentSort != newSort) { // need to shift the current sort
short shiftSort = (currentSort < newSort ? currentSort++ : currentSort--);
Clients shiftClient = db.Clients.Where(x => x.SortOrder == shiftSort).FirstOrDefault(); //Stack Overflow is here -- why??
shiftClient.SortOrder = currentSort;
db.Entry(shiftClient).State = EntityState.Modified;
currentClient.SortOrder = shiftSort;
db.Entry(currentClient).State = EntityState.Modified;
db.SaveChanges();
MakeRoom(newSort, id);
}
return newSort;
}
Edit 3: I have altered my MakeRoom method again:
public void MakeRoom(short newSort, string id) {
var currentClient = db.Clients.Find(new Guid(id));
short currentSort = currentClient.SortOrder;
if(currentSort < newSort) {
var set = db.Clients.Where(x => x.SortOrder > currentSort && x.SortOrder <= newSort).OrderBy(c => c.SortOrder).ToList();
short i = set.First().SortOrder;
set.ForEach(c => {
c.SortOrder = i--;
c.Modified = DateTime.UtcNow;
c.TouchedBy = User.GetClaimValue("UserGuid");
db.Entry(c).State = EntityState.Modified;
});
db.SaveChanges();
} else if(currentSort > newSort) {
var set = db.Clients.Where(x => x.SortOrder >= newSort && x.SortOrder < currentSort).OrderBy(c => c.SortOrder).ToList();
short i = set.First().SortOrder;
set.ForEach(c => {
c.SortOrder = i++;
c.Modified = DateTime.UtcNow;
c.TouchedBy = User.GetClaimValue("UserGuid");
db.Entry(c).State = EntityState.Modified;
});
db.SaveChanges();
}
}
But even though the debugger clearly steps through the code in the right way, the actual DB values do not get changed.
Let the database sort this for you.
var first = db.Clients.FirstOrDefault(c => c.Guid == guid);
var set = db.Clients
.Where(c => c.SortOrder >= first.SortOrder)
.OrderBy(c => c.SortOrder).ToList();
short i = set.First().SortOrder; // or 1
set.ForEach(c => {
c.SortOrder = i++;
db.Entry(c).State = EntityState.Modified;
});
db.SaveChanges();
Oh. My. Goodness. Talk about overlooking the single screw that caused the house to collapse.
My problem was not with the line that was causing the stack overflow -- my problem was with the line immediately prior:
short shiftSort = (currentSort < newSort ? currentSort++ : currentSort--);
Can you see the problem? I didn't, until now. While attempting to implement the solution provided by Jasen (thank you for your efforts, Good Sir, they were greatly appreciated), I was forced to go through my code a bit more carefully, and I noticed something strange: the increments/decrements. Those postfix operators alter the actual value of the variable they are applied to, even when they are part of an assignment, and they hand back the original value. So no wonder my script was messing up -- I was changing the value of currentSort at the same time I was assigning its old value to shiftSort, which meant shiftSort always equalled the row's own sort value, the "swap" swapped the row with itself, and the recursion never made progress.
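To make the pitfall concrete, here is a minimal standalone sketch (plain console code, not part of the original controller) showing what the postfix operators actually do:
using System;

class PostfixDemo {
    static void Main() {
        short currentSort = 53;
        short newSort = 50;

        // The buggy pattern: postfix -- hands back the OLD value of currentSort (53)
        // and only then mutates currentSort to 52, so shiftSort ends up holding the
        // row's own sort value instead of its neighbour's.
        short shiftSort = (currentSort < newSort ? currentSort++ : currentSort--);
        Console.WriteLine("shiftSort={0}, currentSort={1}", shiftSort, currentSort); // shiftSort=53, currentSort=52

        // The intended behaviour: compute the neighbouring value without touching currentSort.
        currentSort = 53;
        shiftSort = (currentSort < newSort ? (short)(currentSort + 1) : (short)(currentSort - 1));
        Console.WriteLine("shiftSort={0}, currentSort={1}", shiftSort, currentSort); // shiftSort=52, currentSort=53
    }
}
In the buggy MakeRoom above, that meant the Where(x => x.SortOrder == shiftSort) lookup kept finding the very row being moved, so the swap was a no-op and the recursion never terminated.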
What I did was two-fold: since there wasn’t any real need to pass a value back, I altered the original HttpPost method as such:
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> EditClient(ModifyClientViewModel model) {
Clients clients = await db.Clients.FindAsync(new Guid(model.ClientId));
if(clients == null) { return HttpNotFound(); }
try {
if(ModelState.IsValid) {
MakeRoom(model.SortOrder, clients.SortOrder);
TextInfo ti = CultureInfo.CurrentCulture.TextInfo;
clients.ClientName = ti.ToTitleCase(model.ClientName.Trim());
clients.CityId = new Guid(model.CityId);
clients.SortOrder = model.SortOrder;
clients.Active = model.Active;
clients.Modified = DateTime.UtcNow;
clients.TouchedBy = User.GetClaimValue("UserGuid");
db.Entry(clients).State = EntityState.Modified;
await db.SaveChangesAsync();
return RedirectToAction("Index");
}
} catch(DbUpdateConcurrencyException ex) {
// Ignore
}
return View(model);
}
Notice how the MakeRoom() is moved to the front, and given only the Source and Destination values? No more need to pass an ID.
Then for the actual method:
public void MakeRoom(short newSort, short oldSort) {
if(oldSort != newSort) { // need to shift the current sort
short shiftSort = (oldSort < newSort ? (short)(oldSort + 1) : (short)(oldSort - 1));
Clients currentClient = db.Clients.Where(x => x.SortOrder == oldSort).FirstOrDefault();
currentClient.SortOrder = shiftSort;
db.Entry(currentClient).State = EntityState.Modified;
Clients shiftClient = db.Clients.Where(x => x.SortOrder == shiftSort).FirstOrDefault();
shiftClient.SortOrder = oldSort;
db.Entry(shiftClient).State = EntityState.Modified;
db.SaveChanges();
MakeRoom(newSort, shiftSort);
}
}
Now look at the assignment to shiftSort - I assign it the value of oldSort offset by one, without touching oldSort itself. This, I am ashamed to say, has made all the difference.
My code now works perfectly, and instantly even over many dozens of intermediary items. I can take an item that had a SortOrder of 3, and move it to a SortOrder of 53, and everything from 53 down to 4 gets shifted down one perfectly to make room (at SortOrder 53) for the item that formerly had a SortOrder of 3.
The nice thing about this method is that it can be used equally well for deletions (to prevent gaps in the numbering) and additions (when you want the new item somewhere other than the end). For deletions you just shift everything after the deleted item (say, SortOrder 33 to Max SortOrder) down one, and for additions you shift everything from the insertion point to the Max SortOrder up one, then insert your value.
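As a rough illustration of the deletion case, a range shift like the sketch below would pull everything after the deleted row down by one (a sketch only; CloseGapAfterDelete is a hypothetical helper name, reusing the same db context and Clients entity as above):
public void CloseGapAfterDelete(short deletedSort) {
    // Hypothetical helper, not part of the original post: after the row that held
    // SortOrder == deletedSort has been removed, shift every later row down by one
    // so the sequence stays gapless.
    var tail = db.Clients
        .Where(c => c.SortOrder > deletedSort)
        .OrderBy(c => c.SortOrder)
        .ToList();
    foreach (var c in tail) {
        c.SortOrder = (short)(c.SortOrder - 1);
        db.Entry(c).State = EntityState.Modified;
    }
    db.SaveChanges();
}
The insertion case is the mirror image: shift the SortOrder values from the insertion point upward by one, then save the new row with the freed value.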
I hope this helps anyone who comes after me, and the fates help them with Google -- they will get reams of results that talk about sorting and paging output, and precious little about changing the value of the sort order for each entry between a source and a destination value.
I am using Spring 3, iReport, JasperReports 4.5.0 to generate the reports. I have three tables like below.
Table Name    Fields
DN            date
DNProd        prodName, prodQty
DNPay         cost, totalCost
The problem is that I need to show the date, prodName, prodQty, cost and totalCost fields in a single report, but these fields belong to different POJO classes. I have searched Google for this and found some solutions, such as using subreports.
But as I am new to these reports I don't know whether that is the correct solution or not. Can anyone point me in the right direction, with a sample if possible?
In my opinion, DynamicJasper suits your situation best.
You need to write an HQL query that will fetch, with the proper joins, all the fields you described in the question from the respective POJOs.
Executing this HQL gives you a List, which you then pass to DynamicJasper to build the report for you. It automatically takes the column names from the POJO field names.
Below is an example.
Session session = null;
try {
SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
session = sessionFactory.openSession();
session.beginTransaction();
List list = session.createQuery("from Employee").list();
session.getTransaction().commit();
DynamicReport dynamicReport = new ReflectiveReportBuilder(list).build();
dynamicReport.setTitle("List of Employees");
JasperPrint jasperPrint = DynamicJasperHelper.generateJasperPrint(dynamicReport, new ClassicLayoutManager(), list);
JasperViewer.viewReport(jasperPrint);
JasperExportManager.exportReportToPdfFile(jasperPrint, "C:\\TestDynamicJasper.pdf");
resp.getWriter().write("Welcome to Show Report");
resp.getWriter().flush();
resp.getWriter().close();
} catch (JRException e) {
// report generation or export failed; log the error
e.printStackTrace();
} catch (ColumnBuilderException e1) {
// DynamicJasper could not build the report columns; log the error
e1.printStackTrace();
} finally {
if(session != null)
session.close();
}
Hope this helps you. :)
We are trying to use WF with multiple tracking participants, which essentially listen to different queries - one for activity states, one for custom tracking records which are a subclass of CustomTrackingRecord.
The problem is that we can use both TrackingParticipants individually, but not together - we never get our subclass of CustomTrackingRecord, only a plain CustomTrackingRecord.
If I put both queries into one TrackingParticipant and then handle everything in one place, both work perfectly (which indicates the error is not where we throw them).
The code in question for the combined one is:
public WorkflowServiceTrackingParticipant ()
{
this.TrackingProfile = new TrackingProfile()
{
ActivityDefinitionId = "*",
ImplementationVisibility = ImplementationVisibility.All,
Name = "WorkflowServiceTrackingProfile",
Queries = {
new CustomTrackingQuery() { Name = "*", ActivityName = "*" },
new ActivityStateQuery() {
States = {
ActivityStates.Canceled,
ActivityStates.Closed,
ActivityStates.Executing,
ActivityStates.Faulted
}
},
}
};
}
When using two TrackingParticipants we have two TrackingProfiles (with different names) that each contain one of the queries.
In the Track method, when using both separately, the lines:
protected override void Track(TrackingRecord record, TimeSpan timeout)
{
Console.WriteLine("*** ActivityTracking: " + record.GetType());
if (record is ActivityBasedTrackingRecord)
{
System.Diagnostics.Debugger.Break();
}
never result in the debugger breaking. When using only the participant that tracks our CustomTrackingRecord subclass (ActivityBasedTrackingRecord), it works.
Does anyone else know about this? For now we have combined both TrackingParticipants into one, but this has the bad side effect that we cannot dynamically extend the logging possibilities, which we would love to do. Is this a known issue with WF somewhere?
Version used: 4.0 SP1 Feature Update 1.
I guess I encountered the exact same problem.
This problem occurs due to the restrictions of the extension mechanism. There can be only one instance per extension type per workflow instance (according to Microsoft's documentation). Interestingly enough, one can still add multiple instances of the same type to one workflow's extensions, which - in the case of TrackingParticipant derivatives - causes weird behavior: only one of their tracking profiles is used for all participants of that type, but all of their overrides of the Track method get invoked.
There is an (imho) ugly workaround for this: derive a new participant class from TrackingParticipant for each task (task1, task2, logging, ...), as sketched below.
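A minimal sketch of that workaround (the class names, profile names and queries are illustrative, not from the original post): one TrackingParticipant-derived class per concern, each carrying its own TrackingProfile.
using System;
using System.Activities.Tracking;

public class ActivityStateTrackingParticipant : TrackingParticipant
{
    public ActivityStateTrackingParticipant()
    {
        // This participant only cares about activity state changes.
        TrackingProfile = new TrackingProfile
        {
            Name = "ActivityStateProfile",
            Queries =
            {
                new ActivityStateQuery { States = { ActivityStates.Closed, ActivityStates.Faulted } }
            }
        };
    }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var stateRecord = record as ActivityStateRecord;
        if (stateRecord != null)
            Console.WriteLine("State: " + stateRecord.Activity.Name + " -> " + stateRecord.State);
    }
}

public class CustomRecordTrackingParticipant : TrackingParticipant
{
    public CustomRecordTrackingParticipant()
    {
        // This participant only cares about the custom tracking records.
        TrackingProfile = new TrackingProfile
        {
            Name = "CustomRecordProfile",
            Queries = { new CustomTrackingQuery { Name = "*", ActivityName = "*" } }
        };
    }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var customRecord = record as CustomTrackingRecord; // or your ActivityBasedTrackingRecord subclass
        if (customRecord != null)
            Console.WriteLine("Custom: " + customRecord.Name);
    }
}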
Regards,
Jacob
I think this problem isn't caused by the extension mechanism, since DerivedParticipant 1 and DerivedParticipant 2 are not the same type (the WF internals just use polymorphism on the base class).
I was running into the same issue: my Derived1 was tracking records that weren't described in its profile.
Derived1.TrackingProfile.Name was "Foo" and Derived2.TrackingProfile.Name was null.
I changed the name from null to "Bar" and it worked as expected.
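In other words (a sketch with placeholder names: Derived1/Derived2 stand for your two participant classes, host for your WorkflowServiceHost), make sure every profile gets a distinct, non-null Name before adding the participants:
// Distinct, non-null profile names keep the runtime's profile cache (which matches
// on profile Name and ActivityDefinitionId, see the reference code below) from
// handing both participants the same runtime profile.
var derived1 = new Derived1();
derived1.TrackingProfile.Name = "Foo";

var derived2 = new Derived2();
derived2.TrackingProfile.Name = "Bar"; // was null before the fix

host.WorkflowExtensions.Add(derived1);
host.WorkflowExtensions.Add(derived2);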
Here is the WF internal reference code, describing how the profile is selected:
// System.Activities.Tracking.RuntimeTrackingProfile.RuntimeTrackingProfileCache
public RuntimeTrackingProfile GetRuntimeTrackingProfile(TrackingProfile profile, Activity rootElement)
{
RuntimeTrackingProfile runtimeTrackingProfile = null;
HybridCollection<RuntimeTrackingProfile> hybridCollection = null;
lock (this.cache)
{
if (!this.cache.TryGetValue(rootElement, out hybridCollection))
{
runtimeTrackingProfile = new RuntimeTrackingProfile(profile, rootElement);
hybridCollection = new HybridCollection<RuntimeTrackingProfile>();
hybridCollection.Add(runtimeTrackingProfile);
this.cache.Add(rootElement, hybridCollection);
}
else
{
ReadOnlyCollection<RuntimeTrackingProfile> readOnlyCollection = hybridCollection.AsReadOnly();
foreach (RuntimeTrackingProfile current in readOnlyCollection)
{
if (string.CompareOrdinal(profile.Name, current.associatedProfile.Name) == 0 && string.CompareOrdinal(profile.ActivityDefinitionId, current.associatedProfile.ActivityDefinitionId) == 0)
{
runtimeTrackingProfile = current;
break;
}
}
if (runtimeTrackingProfile == null)
{
runtimeTrackingProfile = new RuntimeTrackingProfile(profile, rootElement);
hybridCollection.Add(runtimeTrackingProfile);
}
}
}
return runtimeTrackingProfile;
}
I have two tables without any cascade delete configured. I want to delete the parent object along with all of its child objects. I do it this way:
//get parent object
return _dataContext.Menu.Include("ChildMenu").Include("ParentMenu").Include("Pictures").FirstOrDefault(m => m.MenuId == id);
//then I loop over all the child objects
var picList = (List<Picture>)menu.Pictures.ToList();
for (int i = 0; i < picList.Count; i++)
{
if (File.Exists(HttpContext.Current.Server.MapPath(picList[i].ImgPath)))
{
File.Delete(HttpContext.Current.Server.MapPath(picList[i].ImgPath));
}
if (File.Exists(HttpContext.Current.Server.MapPath(picList[i].ThumbPath)))
{
File.Delete(HttpContext.Current.Server.MapPath(picList[i].ThumbPath));
}
//**what must i do here?**
//menu.Pictures.Remove(picList[i]);
// DataManager dm = new DataManager();
// dm.Picture.Delete(picList[i].Id);
//menu.Pictures.de
//_dataContext.SaveChanges();
//picList[i] = null;
}
//delete parent object
_dataContext.DeleteObject(_dataContext.Menu.Include("ChildMenu").Include("ParentMenu")
.Include("Pictures").FirstOrDefault(m => m.MenuId == id));
_dataContext.SaveChanges();
It is enough to set the <OnDelete Action="Cascade" /> for the master association end in the CSDL part of the model.
In this case your code will work.
My situation was slightly different, and it took a while to get it right so I thought it worth documenting. I have two related tables, Quote and QuoteExtension:
Quote (Parent, Primary Key QuoteId)
QuoteExtension (Calculated fields for Quote, Primary and Foreign Key QuoteId)
I didn't have to set the OnDelete action to get it to work - but Craig's comment (if I could vote that up more I would!) led me to discover the issue. I was attempting to delete the Quote when QuoteExtension was not loaded. Therefore I found two ways that worked:
var quote = ent.Quote.Include("QuoteExtension").First(q => q.QuoteId == 2311);
ent.DeleteObject(quote);
ent.SaveChanges();
Or:
var quote = ent.Quote.First(q => q.QuoteId == 2311);
if (quote.QuoteExtension != null)
ent.Refresh(RefreshMode.ClientWins, quote.QuoteExtension);
ent.DeleteObject(quote);
ent.SaveChanges();
Interestingly, trying to delete QuoteExtension manually didn't work (although it may have if I had included ent.SaveChanges() in the middle - this tends to happen only at the end of a unit of work in this system, so I wanted something that didn't rely on it).