I have a window in a Qt application using a PostgreSQL 9.3 database. The window is a form used to display, edit and insert new data.
The form shows data from 3 SQL tables. The tables are related with foreign keys:
contractors (main table) - mapped to "personal data" section
contacts (has foreign key to contractors.ID)
addresses (has foreign key to contractors.ID)
So, in my window's class I have 3 main models (plus 2 proxy models to transpose the tables in the "personal data" and "address data" sections). I use QSqlTableModel for these sections, and a QSqlRelationalTableModel for the contactData section. When opening the window "normally" (to view some contractor), I simply pass the contractor's ID to the constructor and store it in the proper variable. I also call the QSqlTableModel::setFilter(const QString &filter) method on each of the models and set the proper filtering. When opening the window in "add new" mode, I simply pass a -1 or 0 value as the ID, so no data gets loaded into the models.
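For reference, the setup described here might look roughly like this (a minimal sketch; only the table names come from the description above, while the member, filter and FK column names are assumptions):

    personalDataModel = new QSqlTableModel(this);
    personalDataModel->setTable("contractors");
    personalDataModel->setFilter(QString("ID = %1").arg(kontrahentid)); // -1/0 in "add new" mode, so no rows load

    contactDataModel = new QSqlRelationalTableModel(this);
    contactDataModel->setTable("contacts");
    contactDataModel->setFilter(QString("contractor_id = %1").arg(kontrahentid)); // FK column name assumed

    addressDataModel = new QSqlTableModel(this);
    addressDataModel->setTable("addresses");
    addressDataModel->setFilter(QString("contractor_id = %1").arg(kontrahentid)); // FK column name assumed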
All 3 models use the QSqlTableModel::OnManualSubmit edit strategy. When saving the data (triggered by clicking the appropriate button), I start a transaction and then submit the models one by one. The personalData model gets submitted first, as I need to obtain its PK after the insert (to set the FK fields in the other models).
When submitting a model fails, I show a message box with the QSqlError content, roll back the transaction, and return from the method.
When the error occurs on the first model being processed, there is no problem, as nothing was inserted yet. But when the first model is saved and the second or third fails, there is a little problem. I roll back the transaction as before and return from the function, but after correcting the data and submitting again, the first model does not try to submit itself: it does not know that there was a rollback and that its data needs to be inserted again. What would be a good way to notify such a model that it needs to be submitted once again?
At the moment I have ended up with something like this:
void kontrahenciSubWin::on_btnContractorAdd_clicked() {
    // No error list needed: when an error occurs in one model, the whole
    // transaction is broken, so a single error string is enough.
    QString error;
    QSqlDatabase db = QSqlDatabase::database();

    // Back up the data, in case something fails and we have to roll back the transaction.
    // Always one row; it will get erased by submitAll(), as no filter is set
    // (I don't have the record's ID yet).
    QSqlRecord personalDataModelRec = personalDataModel->record(0);
    QList<QSqlRecord> contactDataModelRecList;
    for (int i = 0; i < contactDataModel->rowCount(); i++) {
        contactDataModelRecList.append(contactDataModel->record(i));
    }
    QList<QSqlRecord> addressDataModelRecList;
    for (int i = 0; i < addressDataModel->rowCount(); i++) {
        addressDataModelRecList.append(addressDataModel->record(i));
    }

    db.transaction();

    if (personalDataModel->isDirty() && error.isEmpty()) {
        // submitAll() calls select() on the model, which destroys the data,
        // as the filter is invalid ("WHERE ID = -1").
        if (!personalDataModel->submitAll()) {
            error = personalDataModel->lastError().databaseText();
        } else {
            kontrahentid = personalDataModel->query().lastInsertId().toInt(); // only here can I fetch the ID
            setFilter(ALL); // ...and pass it to the models
        }
    }
    if (contactDataModel->isDirty() && error.isEmpty()) {
        if (!contactDataModel->submitAll()) // slot on_contactDataModel_beforeInsert() sets the FK field
            error = contactDataModel->lastError().databaseText();
    }
    if (addressDataModel->isDirty() && error.isEmpty()) {
        if (!addressDataModel->submitAll()) // slot on_addressDataModel_beforeInsert() sets the FK field
            error = addressDataModel->lastError().databaseText();
    }

    if (!error.isEmpty()) {
        QMessageBox::critical(this, tr("Data was not saved!"),
                              tr("The following errors occurred:") + "\n" + error);
        db.rollback();

        // Reset and re-init the models: set table, filter and so on.
        personalDataModel->clear();
        contactDataModel->clear();
        addressDataModel->clear();
        initModel(ALL);

        // Re-add the data to the models - this is where the backup comes in handy.
        personalDataModel->insertRecord(-1, personalDataModelRec);
        for (QList<QSqlRecord>::iterator it = contactDataModelRecList.begin(); it != contactDataModelRecList.end(); ++it) {
            contactDataModel->insertRecord(-1, *it);
        }
        for (QList<QSqlRecord>::iterator it = addressDataModelRecList.begin(); it != addressDataModelRecList.end(); ++it) {
            addressDataModel->insertRecord(-1, *it);
        }
        return;
    }

    db.commit();
    isInEditMode = false;
    handleGUIOnEditModeChange();
}
Does anyone have a better idea? I doubt it's possible to omit backing up the records before trying to insert them, but maybe there is a better way to re-add them to the model? I tried setRecord, and a removeRows & insertRecord combo, but had no luck. Resetting the whole model seems easiest (I only need to re-init it, as it loses its table, filter, sorting and everything else when cleared).
I suggest you use a function written in PL/pgSQL. It runs as one transaction between BEGIN and END; if anything goes wrong at any point in the code, it will roll back all the data flawlessly.
What you are doing now is not a good design, because you hand control over a piece of functionality (rollback) to a system that is external to where the rollback actually happens (the database). The external system is not designed to do that, while the database, on the contrary, was created and designed to deal with rollbacks and transactions, and it is very good at it. Rebuilding and reinventing this quite complex functionality outside the database is asking for a lot of trouble; you will never get the same flawless rollback handling as you will have using functions within the database.
Let each system do what it can do best.
I have met your problem before, and in my case (using Hibernate) I followed the same line of thought to work it out, until I stepped back from my efforts and re-evaluated the situation.
There are three teams working on the rollback mechanism of a database:
1. the men and women who are writing the source code of the database itself,
2. the men and women who are writing the Hibernate code, and
3. me.
The first team is dedicated to the creation of a good rollback mechanism; if they fail, they have a bad product. They succeeded. The second team is also dedicated to the creation of a good rollback mechanism, but their product is not considered a failure when it does not work in very complex situations.
The last team, me, is not dedicated to this problem at all. Who am I to write a better solution than the people of team 2, or of team 1, when team 2, building on the work of team 1, could not get it to team 1's level?
That is when I decided to use database functions instead.
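To make the suggestion concrete, here is a rough sketch of what the Qt side could look like (everything in it is hypothetical: the function name save_contractor, its signature, and the single-row simplification; in reality contacts and addresses hold multiple rows, so you would pass arrays or call the function once per row):

    // Hypothetical: save_contractor() would be a PL/pgSQL function that inserts
    // into contractors, contacts and addresses and returns the new contractors.ID.
    // If it raises an exception, PostgreSQL rolls back everything the statement
    // did, so the client never has to re-stage its records.
    QSqlQuery q;
    q.prepare("SELECT save_contractor(:name, :phone, :street)");
    q.bindValue(":name", name);
    q.bindValue(":phone", phone);
    q.bindValue(":street", street);
    if (!q.exec() || !q.next()) {
        QMessageBox::critical(this, tr("Data was not saved!"), q.lastError().databaseText());
        return;
    }
    kontrahentid = q.value(0).toInt(); // the new PK, ready to use for filtering

With this approach the models only display data; the multi-table insert, and its rollback, live entirely in the database.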
Related: see Datasource Paging Issue (Revised) for the original question.
Markus, you were kind enough to help out with the issue of incorporating a record count into a query using a calculated datasource. I have a search form with 15 widgets: a mix of date ranges, dropdowns, and text values using ._contains, ._equals, ._greaterThanOrEquals, ._lessThanOrEquals, etc.
I have tested this extensively against MySQL SQL code and it works fine.
I have now added a 16th parameter, PropertyNames, which is a list with binding @datasource.query.filters.Property.PropertyName._in and Options left blank. The widget on the form is hidden because it is only used for additional filtering.
Logic such as the following is used, so that a particular logged-in user can only view their own properties. So if they perform a search and the Property is not specified, we do:
if (params.param_Property === null && canViewAllRecords === false) {
    console.log(params.param_PropertyNames); // correct output
    ds.filters.Property.PropertyName._in = params.param_PropertyNames;
}
The record count (records.length) is correct, and if I loop through the array of records, the record set is correct.
However, on the results page the table displays a larger result set which omits the PropertyNames filter. So if I were to search on Status 'Open' (50 results in MySQL) and then add a single value ['Property Name London SW45'] for params.param_PropertyNames, the record count is 6 and the records array has 6 entries, but the datasource displays 50. So the datasource is not filtering on the property array.
Initially I tried without adding the additional parameter and form widget, just using code such as:
if (params.param_Property === null && canViewAllRecords === false) {
    console.log(params.param_PropertyNames); // correct output
    ds.filters.Property.PropertyName._in = properties; // an array of properties to filter out
}
But this didn't work, hence the idea of adding a form widget and an additional parameter to the calculated record-count datasource.
If I inspect query.parameters I see:
"param_Status": "Open",
"param_PropertyNames": ["Property Name London SW45"],
If I inspect query.filters:
name=param_Status, value=Open
name=param_PropertyNames, value=[]}]}
It looks as though the filter isn't set. Even hard-coding
ds.filters.Property.PropertyName._in = ['Property Name London SW45'];
I get the same result.
Have you got any idea what could be causing this issue, and what I can do as a workaround?
For a server-side solution, I would suggest editing your SQL datasource's query script (the server-side script that is supposed to filter by this property list) and including the same code in the server-side script for your calculated Count datasource. The code would look something like this, though I don't know your exact details:
var subquery = app.models.Directory.newQuery();
subquery.filters.PrimaryEmail._equals = Session.getActiveUser().getEmail();
subquery.prefetch.Property._add();
var results = subquery.run();

if (!results[0].CanViewAllRecords) {
    query.filters.Property.PropertyName._in = results[0].Property.map(function(i) { return i.PropertyName; });
}
By adding this code you filter your Directory by the current user and prefetch the related Property table; then you set the filter only if the user's CanViewAllRecords is false, using the JS map function to create an array of the PropertyName field values from the Property table. As I said, your code may not be exactly the same, depending on how you have to retrieve the user's CanViewAllRecords property, and of course I don't know the relation between your user and Property tables either (one-to-many or otherwise). But this should give you an idea of how to implement this on the server side.
How can I access the results of a client's questionnaires (type KMQuestionnaireRun) so that I can export the results to XML dynamically?
A sample of the class that I am working on:
while select rmlSomaticMeasures
    outer join rmlSomatometryWorker
    where rmlSomatometryWorker.RMLRef == rmlTable.RecId
       && rmlSomatometryWorker.SomaticMeasureId == rmlSomaticMeasures.SomaticMeasureId
{
    if (rmlSomatometryWorker.Value)
    {
        nodeMeasure = doc.createElement(strReplace(strUpr(rmlSomaticMeasures.SomaticMeasureId), " ", "_"));
        nodeMeasure.text(strReplace(num2Str(rmlSomatometryWorker.Value, 0, 5, 1, 0), " ", ""));
        nodeSOMATOMETRIA.appendChild(nodeMeasure);
    }
    else
    {
        nodeMeasure = doc.createElement(strReplace(strUpr(rmlSomaticMeasures.SomaticMeasureId), " ", "_"));
        nodeSOMATOMETRIA.appendChild(nodeMeasure);
    }
}
Short answer, aka "The fish":
The results are stored in the tables KMVirtualNetworkAnswerTable and KMVirtualNetworkAnswerLine.
Long answer, aka "Let me tell you how to fish":
You already figured out that each time a questionnaire is done, it is processed by one of the subclasses of abstract class KMQuestionnaireRun. When I did one of the questionnaires in Contoso, I noticed that afterwards a little message pops up "The completed questionnaire has been saved". I figured that is a good place to start, so I jumped to the code line that produces that message (just select the message in the infolog and click "Edit"). This brought me to class KMQuestionnaireSave, method save (which is called by method close in class KMQuestionnaireRun). From there it is fairly easy to navigate to class KMQuestionnaireSaveResult, method saveAll and see how the above tables get written.
I figured this out in version AX 2012 R2 CU7. I did not check other versions, but I would guess the data model is similar or identical.
I have a very simple row that I'm inserting using Entity, which I do like so:
var context = GetEntityContext();
SOMEPOCO newobj = new SOMEPOCO
{
    Data = data
};
context.SOMEOBJECTS.Add(newobj);
context.SaveChanges();
return newobj.ID;
And newobj.ID (the auto-incremented primary key) is indeed populated. No errors are raised or exceptions thrown. But when I go to SQL Management Studio and query for the item, or look it up in code, it doesn't show up. Yet if I manually make an entry in the DB, the primary key increments as though the previous, failed entry were there.
What could be causing this?
Thanks.
I want to send an alert in AX when any field in the vendor table changes (and on create/delete of a record).
In the alert, I would like to include the previous and current value.
But it appears that you can't set up alerts for when any field in a table changes; you need to set one up for EVERY FIELD?! I hope I am mistaken.
Also, how can I send this notification to a group of people?
I have created a new class with a static method that I can easily call from any .update() method to alert me when a record changes, and tell me what changed in the record. It uses the built-in email templates of AX as well.
static void CompareAndEmail(str emailTemplateName, str nameField, str recipient, Common original, Common modified)
{
    UserInfo userInfo;
    Map emailParameterMap = new Map(Types::String, Types::String);
    str changes;
    int i, fieldId;
    DictTable dictTable = new DictTable(original.TableId);
    DictField dictField;
    ;

    for (i = 1; i <= dictTable.fieldCnt(); i++)
    {
        fieldId = dictTable.fieldCnt2Id(i);
        dictField = dictTable.fieldObject(fieldId);

        if (dictField.isSystem()) // skip system fields (RecId, created/modified info, etc.)
            continue;

        if (original.(fieldId) != modified.(fieldId))
        {
            changes += strFmt("%1: %2 -> %3 \n\r",
                              dictField.name(),
                              original.(fieldId),
                              modified.(fieldId));
        }
    }

    // Send the notification email
    select Name from userInfo where userInfo.id == curUserId();
    emailParameterMap.insert("modifiedBy", userInfo.Name);
    emailParameterMap.insert("tableName", dictTable.name());
    emailParameterMap.insert("recordName", original.(dictTable.fieldName2Id(nameField)));
    emailParameterMap.insert("recordChanges", changes);

    SysEmailTable::sendMail(emailTemplateName, "en-us", recipient, emailParameterMap);
}
Then in the .update() method I just add this one call:
//Compare and email differences
RecordChangeNotification::CompareAndEmail(
    "RecChange",        // template to use
    "Name",             // name field of the record (MUST BE VALID)
    "user#domain.com",  // recipient email
    this_Orig,          // original record
    this                // modified record
);
The only things I want to improve upon are:
moving the template name and recipient into a table, for easier maintenance
better formatting for the change list, which I don't know how to template yet
As you have observed, the alert system is not designed for "any field" changes, only for specific field changes.
This is a bogus request anyway, as it would generate many alerts. The right thing to do is to enable database logging for the VendTable table, then send a daily report (in batch) to those interested.
This is done in Administration\Setup\Database logging. There is a matching report in Administration\Reports; you will need to know the table number to select the table.
This solution requires a "Database logging" license key.
If you really need this feature, you can create a class that sends a message/email with a footprint of the old record vs. the new record. Then simply add some code to the table's "write"/"update"/"save" methods to make sure your class is run whenever VendTable gets edited.
But I have to agree with Jan: this will generate a lot of alerts. I'd spend some energy checking whether the modifications done in VendTable are in line with the business needs, and prohibiting illegal modifications. That includes making sure only the right people have sufficient access.
Good luck!
I agree with Skaue's suggestion: just write a class to send a mail with the changes in the vendor table, and execute this class from the update method of VendTable.
Thanks and regards,
Deepak Kumar
Part of the web application I'm working on is an area displaying messages from management to 1...n users. I have a DataAccess project that contains the LINQ to SQL classes, and a website project that is the UI. My database looks like this:
User -> MessageDetail <- Message <- MessageCategory
MessageDetail is a join table that also contains an IsRead flag.
The list of messages is grouped by category. I have two nested ListView controls on the page: one outputs the group name, while a second one nested inside it is bound to MessageDetails and outputs the messages themselves. In the code-behind for the page listing the messages I have the following code:
protected void MessageListDataSource_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
    var db = new DataContext();

    // Parse the input strings from the web form.
    int categoryIDFilter;
    DateTime dateFilter;
    string catFilterString = MessagesCategoryFilter.SelectedValue;
    string dateFilterString = MessagesDateFilter.SelectedValue;

    // TryParse leaves default values if parsing is unsuccessful (i.e. if "all" is selected):
    // DateTime.MinValue for dates, 0 for int.
    DateTime.TryParse(dateFilterString, out dateFilter);
    Int32.TryParse(catFilterString, out categoryIDFilter);
    bool showRead = MessagesReadFilter.Checked;

    var messages =
        from detail in db.MessageDetails
        where detail.UserID == (int)Session["UserID"]
        where detail.Message.IsPublished
        where detail.Message.MessageCategoryID == categoryIDFilter || (categoryIDFilter == 0)
        where dateFilter == detail.Message.PublishDate.Value.Date || (dateFilter == DateTime.MinValue)
        // is unread, the showRead filter is on, or the message was marked read today
        where detail.IsRead == false || showRead || detail.ReadDate.Value.Date == DateTime.Today
        orderby detail.Message.PublishDate descending
        group detail by detail.Message.MessageCategory into categories
        orderby categories.Key.Name
        select new
        {
            MessageCategory = categories.Key,
            MessageDetails = categories.Select(d => d)
        };

    e.Result = messages;
}
This code works, but sticking a huge LINQ statement like this in the code-behind for a LinqDataSource control just doesn't sit right with me.
It seems like I'm still coding queries into the user interface, only now it's LINQ instead of SQL. However, I feel that building another layer between the L2S classes and the UI would cut back on some of the flexibility of LINQ. Isn't the whole point to reduce the amount of code you write to fetch data?
Is there some possible middle ground I'm not seeing, or am I just misunderstanding the way LINQ to SQL is supposed to be used? Advice would be greatly appreciated.
All your LINQ queries should live in a business logic class; nothing changes here from older methodologies like ADO.
If you are a purist, you should always return List<T> from the methods in the business class; in fact, the DataContext should only be visible to the business classes.
Then you can manipulate the list in the user interface.
If you are a pragmatist, you can return an IQueryable object and do some of the manipulation in the user interface.
Regardless of LINQ, I think that mixing presentation code with database-related code is not a good idea. I would create a simple DB abstraction layer on top of the LINQ queries. In my opinion LINQ is just a convenient tool; it doesn't have a serious impact on traditional application design.