We have an integration setup that creates purchase orders on the batch server. For example, the batch job may run, pick up 5 invoices coming from an external source, and attempt to post them.
If 4 are successful and 1 fails, we catch the error using the code below:
errEnumerator = SysInfologEnumerator::newData(infolog.cut());
while (errEnumerator.moveNext())
{
msgStruct = new SysInfologMessageStruct(errEnumerator.currentMessage());
errException = errEnumerator.currentException();
messageBody += msgStruct.message() + "\n";
}
This works great for catching the error, which we then return into a log. The issue is that the entire Infolog is shown: "Number of vouchers posted to the journal 1." appears 4 times, followed by the actual error message.
After each successful post we clear the Infolog by calling infolog.clear();.
If you debug this code in X++, it does clear the Infolog each time, and only the actual error is shown, without the previous successful posts. But for some reason the batch job running on the batch server does not clear the Infolog after each successful post. We have tried full CIL compiles, restarting the services, and so on; nothing seems to work.
Is there another way to clear the Infolog on the batch server? Thanks.
If your goal is to store only the error lines in messageBody and not the 'success' lines, you don't have to clear the Infolog. You only need to add the following check at the beginning of your while loop:
if (errEnumerator.currentException() == Exception::Info ||
    errEnumerator.currentException() == Exception::Warning)
{
    continue; // skip informational and warning lines, keep only errors
}
Do not mess with the Infolog!
Clearing it hides information, warnings and errors that you will need later, for example when troubleshooting batch problems.
So please do not call clear() or cut().
Instead, copy what you want:
numLine = infologLine(); // remember the current number of Infolog lines

try
{
    // Do something useful
}
catch (Exception::Error)
{
    // Copy only the lines added after numLine, i.e. those from this try block
    doTheLog(infolog.copy(numLine + 1, infologLine()));
    throw error("That did not work!");
}
First store the current Infolog line number. On error, process only the relevant Infolog lines.
If the Infolog is long, consider passing the line numbers rather than passing the container by value:
doTheLog(numLine + 1, infologLine());
Then call infolog.copy inside the method.
I am trying to consume a maximum of 1000 messages from Kafka at a time (I am doing this because I need to batch-insert into MSSQL). I was under the impression that Kafka keeps an internal queue which fetches messages from the brokers, and that when I use the consumer.Consume() method it just checks whether there are any messages in the internal queue and returns if it finds something; otherwise it blocks until the internal queue is updated or until the timeout expires.
I tried to use the solution suggested here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1164#issuecomment-610308425
But when I specify TimeSpan.Zero (or any other timespan up to 1000 ms) the consumer never consumes any messages. If I remove the timeout it does consume messages, but then I am unable to exit the loop if there are no more messages left to be read.
I also saw another question on Stack Overflow which suggested reading the offset of the last message sent to Kafka and then reading messages until that offset is reached, then breaking from the loop. Currently I only have one consumer and 6 partitions for the topic; I haven't tried it yet, but I think managing offsets for each of the partitions might make the code messy.
Can someone please tell me what to do?
static List<RealTime> getBatch()
{
var config = new ConsumerConfig
{
BootstrapServers = ConfigurationManager.AppSettings["BootstrapServers"],
GroupId = ConfigurationManager.AppSettings["ConsumerGroupID"],
AutoOffsetReset = AutoOffsetReset.Earliest,
};
List<RealTime> results = new List<RealTime>();
List<string> malformedJson = new List<string>();
using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
{
consumer.Subscribe("RealTimeTopic");
int count = 0;
while (count < batchSize)
{
var consumerResult = consumer.Consume(1000);
if (consumerResult?.Message is null)
{
break;
}
Console.WriteLine("read");
try
{
RealTime item = JsonSerializer.Deserialize<RealTime>(consumerResult.Message.Value);
results.Add(item);
count += 1;
}
catch(Exception e)
{
Console.WriteLine("malformed");
malformedJson.Add(consumerResult.Message.Value);
}
}
consumer.Close();
}
Console.WriteLine(malformedJson.Count);
return results;
}
I found a workaround.
For some reason the consumer first needs to be called without a timeout, which means it will block until it gets at least one message. After that, calling Consume with a timeout of zero fetches the rest of the messages one by one from the internal queue. This seems to work out for the best.
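A minimal sketch of that pattern (the helper name ConsumeBatch is mine; it assumes Confluent.Kafka 1.x and a consumer/batch size like those in the code above):
using System;
using System.Collections.Generic;
using Confluent.Kafka;

static List<ConsumeResult<Ignore, string>> ConsumeBatch(
    IConsumer<Ignore, string> consumer, int batchSize)
{
    var batch = new List<ConsumeResult<Ignore, string>>();

    // First call without a timeout: blocks until at least one message arrives.
    batch.Add(consumer.Consume());

    // Further calls with TimeSpan.Zero only poll the already-filled internal
    // queue, so they return immediately once it is empty.
    while (batch.Count < batchSize)
    {
        var next = consumer.Consume(TimeSpan.Zero);
        if (next?.Message is null)
        {
            break; // internal queue drained
        }
        batch.Add(next);
    }
    return batch;
}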
I had a similar problem; updating the Confluent.Kafka and librdkafka libraries from version 1.8.2 to 2.0.2 helped.
I'm trying to make a Kafka consumer using .NET Core 2.1. The consumer should read one message, compare its timestamp, and commit or not, so it can stay on the same message until this validation passes. See my code:
while(true)
{
try
{
var cr = consumer.Consume(TimeSpan.FromMilliseconds(4000));
if (cr == null)
{
Console.WriteLine("Exiting ... no messages to process");
break;
}
double totalSeconds = (DateTime.UtcNow - cr.Timestamp.UtcDateTime).TotalSeconds;
Console.WriteLine($"TotalSeconds = {totalSeconds} , Resume = {resumeTimeSeconds}");
if (totalSeconds > resumeTimeSeconds)
{
Console.WriteLine($"Message = {cr.Value}");
consumer.Commit();
}
else
{
Console.WriteLine($"Skipping... {cr.Value}");
continue;
}
}
catch (ConsumeException e)
{
Console.WriteLine($"Error occured: {e.Error.Reason}");
}
}
So, I have 10 messages in my topic and the LAG is 2. I want the next message to be returned only if I Commit() the previous one, but the consumer.Consume() method always returns the next message.
Committed offsets only come into play when your consumer starts (or recovers from a crash). While it is running, your consumer internally keeps track of the last received offset for each partition, regardless of commits.
What you can do is use seek() to go back to the offset of the message you just tried to process, and then retry.
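A minimal sketch of that idea (assuming Confluent.Kafka 1.x and reusing resumeTimeSeconds from the question):
// If the message is not old enough to process yet, seek back to its own
// offset so that the next Consume() returns the same message again.
var cr = consumer.Consume(TimeSpan.FromMilliseconds(4000));
if (cr != null)
{
    double totalSeconds = (DateTime.UtcNow - cr.Message.Timestamp.UtcDateTime).TotalSeconds;
    if (totalSeconds > resumeTimeSeconds)
    {
        consumer.Commit(cr);                    // done with this message
    }
    else
    {
        consumer.Seek(cr.TopicPartitionOffset); // rewind one message and retry later
    }
}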
Yannick
We are using Flyway 4 (great tool!) on Oracle.
When invalid DDL is committed, the continuous database build breaks and the whole team gets an email. All good so far.
But when code that breaks one of our stored procedures is committed (i.e. the procedure gets created, but fails to compile), Flyway still reports a successful migration.
During the migration we see something like:
DB: Warning: execution completed with warning (SQL State: 99999 - Error Code: 17110)
...but the Flyway Ant task still reports success.
As we have a lot of stored procedures, 9 times out of 10 it is these that get broken by developers, not the DDL. We would really like Flyway to fail on a warning as well. Can anyone advise how best to approach this?
Solved! We found an acceptable solution and implemented it as follows, utilising Flyway's callback mechanism, which is documented on the Flyway website.
There are many callbacks available, invoked at various points, but the one that suits our needs is afterMigrate. In the callback we can execute SQL (on Oracle) that counts the number of invalid objects in the current user's schema.
So, implementing a Java afterMigrate callback as follows does the job:
public void afterMigrate(Connection connection) {
    // Count stored objects that failed to compile during the migration
    String countInvalidObjs = "select count(*) " +
            "from user_objects " +
            "where object_type in ('FUNCTION','PROCEDURE','PACKAGE','PACKAGE BODY','TRIGGER') " +
            "and status = 'INVALID' ";
    int invalidObjCount = -1; // -1 means the check itself could not be run
    try (Statement statement = connection.createStatement();
         ResultSet rs = statement.executeQuery(countInvalidObjs)) {
        if (rs.next()) {
            invalidObjCount = rs.getInt(1);
        }
    } catch (SQLException e) {
        System.out.println("*error* " + e.getMessage());
    }
    // Fail the migration if any object is invalid, or if the check failed
    if (invalidObjCount != 0) {
        throw new IllegalArgumentException("failed to complete migration, build finished with database warnings");
    }
}
I have a window in a Qt application using a PostgreSQL 9.3 database. The window is a form used to display, edit and insert new data.
I have data from 3 SQL tables in that view. The tables are related with foreign keys:
contractors (main table) - mapped to "personal data" section
contacts (has foreign key to contractors.ID)
addresses (has foreign key to contractors.ID)
So, in my window's class I have 3 main models (plus 2 proxy models to transpose the tables in the "personal data" and "address data" sections). I use QSqlTableModel for these sections, and a QSqlRelationalTableModel for the contact data section. When opening the window "normally" (to view some contractor), I simply pass the contractor's ID to the constructor and store it in a member variable. I also call QSqlTableModel::setFilter(const QString &filter) on each of the models to set the proper filtering. When opening the window in "add new" mode I pass -1 or 0 as the ID, so no data gets loaded into the models.
All 3 models use the QSqlTableModel::OnManualSubmit edit strategy. When saving the data (triggered by clicking a button), I start a transaction and then submit the models one by one. The personalData model gets submitted first, as I need to obtain its PK after the insert (to set the FK fields in the other models).
When submitting a model fails, I show a message box with the QSqlError content, roll back the transaction and return from the method.
When the error occurs on the first model being processed there is no problem, as nothing was inserted yet. But when the first model is saved and the second or third fails, there is a little problem: I roll back the transaction as before and return from the function, but after correcting the data and submitting again, the first model does not try to submit itself, as it doesn't know there was a rollback and that its data needs to be inserted again. What would be a good way to notify such a model that it needs to be submitted once again?
At the moment I have ended up with something like this:
void kontrahenciSubWin::on_btnContractorAdd_clicked() {
//QStringList errorList; // when an error occurs in one model the whole transaction gets broken, so no need for a list
QString error;
QSqlDatabase db = QSqlDatabase::database();
//backup the data - in case something fails and we have to rollback the transaction
QSqlRecord personalDataModelrec = personalDataModel->record(0); // always one row. will get erased by SubmitAll, as no filter is set, because I don't have its ID.
QList<QSqlRecord> contactDataModelRecList;
for (int i = 0 ; i< contactDataModel->rowCount(); i++) {
contactDataModelRecList.append( contactDataModel->record(i) );
}
QList<QSqlRecord> addressDataModelRecList;
for (int i = 0 ; i< addressDataModel->rowCount(); i++) {
addressDataModelRecList.append( addressDataModel->record(i) );
}
db.transaction();
if ( personalDataModel->isDirty() && error.isEmpty() ) {
if (!personalDataModel->submitAll()) //submitAll calls select() on the model, which destroys the data as the filter is invalid ("where ID = -1")
//errorList.append( personalDataModel->lastError().databaseText() );
error = personalDataModel->lastError().databaseText();
else {
kontrahentid = personalDataModel->query().lastInsertId().toInt(); //only here can I fetch ID
setFilter(ALL); //and pass it to the models
}
}
if ( contactDataModel->isDirty() && error.isEmpty() )
if (!contactDataModel->submitAll()) //slot on_contactDataModel_beforeInsert() sets FK field
//errorList.append( contactDataModel->lastError().databaseText() );
error = contactDataModel->lastError().databaseText();
if ( addressDataModel->isDirty() && error.isEmpty() )
if (!addressDataModel->submitAll()) //slot on_addressDataModel_beforeInsert() sets FK field
//errorList.append( addressDataModel->lastError().databaseText() );
error = addressDataModel->lastError().databaseText();
//if (!errorList.isEmpty()) {
// QMessageBox::critical(this, tr("Data was not saved!"), tr("The following errors occured:") + " \n" + errorList.join("\n"));
if (!error.isEmpty()) {
QMessageBox::critical(this, tr("Data was not saved!"), tr("The following errors occurred:") + " \n" + error);
db.rollback();
personalDataModel->clear();
contactDataModel->clear();
addressDataModel->clear();
initModel(ALL); //re-init models: set table and so on.
//re-add data to the models - backup comes handy
personalDataModel->insertRecord(-1, personalDataModelrec);
for (QList<QSqlRecord>::iterator it = contactDataModelRecList.begin(); it != contactDataModelRecList.end(); it++) {
contactDataModel->insertRecord(-1, *it);
}
for (QList<QSqlRecord>::iterator it = addressDataModelRecList.begin(); it != addressDataModelRecList.end(); it++) {
addressDataModel->insertRecord(-1, *it);
}
return;
}
db.commit();
isInEditMode = false;
handleGUIOnEditModeChange();
}
Does anyone have a better idea? I doubt it's possible to omit backing up the records before trying to insert them, but maybe there is a better way to re-add them to the model? I tried setRecord(), and a removeRows() & insertRecord() combo, but no luck. Resetting the whole model seems easiest (I only need to re-init it, as it loses its table, filter, sorting and everything else when cleared).
I suggest you use a function written in PL/pgSQL. Its body runs in one transaction between BEGIN and END; if something goes wrong at any point in the code, it will roll back all the data flawlessly.
What you are doing now is not a good design, because you hand control over a certain piece of functionality (the rollback) to a system that is external to where the rollback actually happens (the database). The external system is not designed for that, while the database, on the contrary, was created and designed to deal with rollbacks and transactions; it is very good at it. Rebuilding and reinventing this functionality, which is quite complex, outside the database is asking for a lot of trouble. You will never get the same flawless rollback handling as you will have using functions within the database.
Let each system do what it does best.
I have met your problem before and had the same line of thought to work it out, in my case using Hibernate, until I stepped back from my efforts and re-evaluated the situation.
There are three teams working on the rollback mechanism of a database:
1. the men and women who are writing the source code of the database itself,
2. the men and women who are writing the Hibernate code, and
3. me.
The first team is dedicated to the creation of a good rollback mechanism; if they had failed, they would have a bad product, and they succeeded. The second team is also dedicated to the creation of a good rollback mechanism; their product only fails in very complex situations.
The last team, me, is not dedicated to this problem. Who am I to write a better solution than the people of team 2 or team 1, building on the work of team 2, who themselves were not able to get it to the level of team 1?
That is when I decided to use database functions instead.
I recently started this question in another thread (to which Reed Copsey graciously responded), but I don't feel I framed the question well.
At the core of my question, I would like an illustration of how to gain access to data *as* it is being get/set.
I have Page.aspx.cs and, in the codebehind, I have a loop:
List<ServerVariable> files = new List<ServerVariable>();
for (int i = 0; i <= Request.Files.Count - 1; i++)
{
m_objFile = Request.Files[i];
m_strFileName = m_objFile.FileName;
m_strFileName = Path.GetFileName(m_strFileName);
files.Add(new ServerVariable(i.ToString(),
this.m_strFileName, "0"));
}
//CODE TO COPY A FILE FOR UPLOAD TO THE
//WEB SERVER
//WHEN THE UPLOAD IS DONE, SET THE ITEM TO
//COMPLETED
int index = files.FindIndex(p => p.Completed == "0");
files[index] = new ServerVariable(index.ToString(),
this.m_strFileName, "1");
The "ServerVariable" type gets and sets ID, File, and Completed.
Now, I need to show the user the file upload "progress" (in effect, the time between when the loop adds the ServerVariable item to the list and when the Completed status changes from 0 to 1).
Now, I have a web service method "GetStatus()" that I would like to
use to return the files list (created above) as a JSON string (via
JQuery). Files with a completed status of 0 are still in progress,
files with a 1 are done.
MY QUESTION IS: what does the code inside GetStatus() look like? How do I query List<ServerVariable> *as* it is being populated and return the results in real time? I have been advised that I need to lock the working process (the one setting the ServerVariable data) while I query the values returned in GetStatus(), and then unlock that same process.
If I have explained myself well, I'd appreciate a code illustration of
the logic in GetStatus().
Thanks for reading.
Have a look at this link about multithreading locks.
You need to lock the object on both read and write.
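For example, a minimal sketch of guarding the shared list with one lock object (the names below are mine, not from your code):
// Shared state: both the upload loop (writer) and GetStatus() (reader)
// must take the same lock before touching the list.
private static readonly object filesLock = new object();
private static readonly List<ServerVariable> files = new List<ServerVariable>();

// Writer side: mark an upload as completed.
static void MarkCompleted(int index, ServerVariable done)
{
    lock (filesLock)
    {
        files[index] = done;
    }
}

// Reader side: called from GetStatus(); copy under the lock, then
// serialize the copy to JSON outside it so the writer is blocked briefly.
static List<ServerVariable> GetStatusSnapshot()
{
    lock (filesLock)
    {
        return new List<ServerVariable>(files);
    }
}
GetStatus() can then serialize the snapshot to JSON without holding up the upload loop.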