I have a requirement to generate a full customer-by-item price list to be exported to a 3rd-party program, taking trade agreements, discounts, etc. into account.
To get the list of customers by item I have tried:
creating a double loop (e.g. an outer loop over all customers, an inner loop over all items)
creating a select statement that joins CustTable to InventTable
To generate a price, I have been creating a fake SalesLine for the given customer/item and executing salesLine.calcLineAmount(1) in X++.
However, this takes ~6 hours to process the full customer/item list.
The only other thing I have thought of is to run this process once and store the results in a table, then only update the relevant records whenever a price, trade agreement or discount changes.
Does anyone have any other suggestions of a better way to achieve this result?
How did you join CustTable to InventTable? I believe that's just a Cartesian join (all possible customers against all possible items), which is inherently slow anyway.
And the nature of pricing/trade agreements/discounts is that they change frequently, so it's not really practical to keep a running table of every price for every customer, especially if you have trade agreements with date ranges, specific quantity thresholds or units, the warehouse it came out of, etc.
Here is code to do what you want, and I believe it will run faster than whatever you're doing. I wrote it quickly, but it should work; just remove the integer breaks, which I only added so it wouldn't run forever.
static void Job66(Args _args)
{
PriceDisc priceDisc;
container retVal;
CustTable custTable;
InventTable inventTable;
InventTableModule inventTableModule;
int i, n;
;
while select custTable
{
i++;
if (i>5)
break;
n = 0;
while select inventTable
join inventTableModule
where inventTableModule.ItemId == inventTable.ItemId &&
inventTableModule.ModuleType == ModuleInventPurchSales::Sales
{
n++;
if (n>10)
break;
retVal = priceDisc::findItemPriceAgreement(ModuleInventPurchSales::Sales,
inventTable.ItemId,
InventDim::findOrCreateBlank(false),
inventTableModule.UnitId,
SystemDateGet(),
1,
custTable.AccountNum,
custTable.Currency,
custTable.PriceGroup);
info(strfmt("%1 - %2 - %3 - %4 - %5 - %6 - %7 - %8 - %9", custTable.AccountNum,
custTable.Name,
inventTable.ItemId,
inventTable.ItemName,
conPeek(retVal, 1), // priceDisc.price(),
conPeek(retVal, 2), // priceDisc.markup(),
conPeek(retVal, 3), // priceDisc.priceUnit(),
conPeek(retVal, 4), // priceDisc.deliveryDays(),
conPeek(retVal, 5))); // priceDisc.calendarDays()];
}
}
}
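If you do go down the route you mentioned of persisting the results and updating them incrementally, it is worth buffering the inserts rather than writing one record at a time. Below is a minimal sketch of what could sit inside the loops above, assuming a hypothetical staging table CustItemPriceStaging with AccountNum, ItemId and Price fields (the table and field names are made up for illustration):
CustItemPriceStaging staging;       // hypothetical staging table
RecordInsertList     insertList;
;
insertList = new RecordInsertList(tableNum(CustItemPriceStaging));

// Inside the inner item loop, after findItemPriceAgreement has returned:
staging.clear();
staging.AccountNum = custTable.AccountNum;
staging.ItemId     = inventTable.ItemId;
staging.Price      = conPeek(retVal, 1);    // price from the agreement lookup
insertList.add(staging);                    // buffered in memory, nothing hits the database yet

// After both loops have completed:
insertList.insertDatabase();                // one set-based insert of all buffered rows
The export to the 3rd-party program (or the incremental updates when an agreement changes) can then read from that table instead of recalculating every combination.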
How to close a purchase order after updating packingSlip with code?
I know that I can have a different quantity in purchLine.receivedNow for every purchLine, and I need to post a packing slip and close the purchase order no matter how many items are delivered.
I am trying to post a packing slip via X++ in AX 2009 and it works fine.
However, I need to close the purchase order at the same time.
I basically need:
purchParmline.closed = true;
Any ideas on how to implement this? I have searched and found a lot of different ways to post purchase orders, but nothing that quite answers my question.
void postPackingSlip(purchId _purchId, num _packingSlipId)
{
PurchFormLetter PurchFormLetter;
PurchTable PurchTable;
;
PurchTable = PurchTable::find(_purchId,true);
purchFormLetter = purchFormLetter::construct(DocumentStatus::PackingSlip);
PurchFormLetter.update(PurchTable, _packingSlipId , today(), PurchUpdate::ReceiveNow ,AccountOrder::None,NoYes::No,NoYes::No);
}
I also tried to do it this way, but with no success:
void postPackingSlipOld(purchId _purchId, num _packingSlipId)
{
PurchFormLetter purchFormLetter;
PurchTable purchTable;
purchparmtable purchParmtable;
ParmId parmId;
PurchLine purchLine;
purchparmline purchparmline;
;
purchTable=PurchTable::find(_purchId);
purchFormLetter = PurchFormLetter::construct(DocumentStatus::PackingSlip);
purchFormLetter.createParmUpdate();
purchParmtable = purchParmtable::find(_purchid, _packingSlipId);
purchFormLetter.createParmTable(purchParmTable,purchTable);
purchParmTable.Num = _packingSlipId;
purchParmTable.insert();
while select purchLine
where purchLine.PurchId == purchTable.purchId
{
purchParmLine.ParmId = purchParmTable.ParmId;
purchParmLine.InitFromPurchLine(purchLine);
purchParmLine.ReceiveNow = PurchLine.PurchReceivedNow;
purchParmLine.TableRefId = purchParmTable.TableRefId ;
purchParmLine.closed = true;
purchParmLine.setQty(DocumentStatus::PackingSlip,false, true);
purchParmLine.setLineAmount();
purchParmLine.insert();
}
purchFormLetter.proforma (false);
purchFormLetter.specQty (PurchUpdate::ReceiveNow);
purchFormLetter.transDate (today());
purchFormLetter.run();
}
I'm a little grey on what you're asking, but if it's for AX 2009 and you want to "close" the PO, I think that's the same as going to the PO line and doing Functions > Deliver Remainder and choosing Cancel Quantity.
If that's what you want, I believe the logic in Cancel Quantity is just:
PurchLine.RemainPurchPhysical = 0;
PurchLine.RemainInventPhysical = 0;
PurchLine.update();
And the update() takes care of changing the statuses.
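If the goal is to close every line on the order right after posting the packing slip, a minimal sketch (assuming AX 2009 and that the Cancel Quantity logic above is really all that is needed) is to loop over the lines and zero the remainders inside a transaction:
PurchLine purchLine;
;
ttsBegin;

// Zero the remaining quantities on every line of the order;
// update() then recalculates the line and order statuses.
while select forUpdate purchLine
    where purchLine.PurchId == _purchId
{
    purchLine.RemainPurchPhysical  = 0;
    purchLine.RemainInventPhysical = 0;
    purchLine.update();
}

ttsCommit;
This could be called from postPackingSlip after PurchFormLetter.update() returns, but treat it as a sketch; the real Deliver Remainder / Cancel Quantity function may perform more validation than this.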
I'm trying to get the row count of a QSqlQuery; the database driver is QSQLITE.
bool Database::runSQL(QSqlQueryModel *model, const QString & q)
{
Q_ASSERT (model);
model->setQuery(QSqlQuery(q, my_db));
rowCount = model->query().size();
return my_db.lastError().isValid();
}
The query here is a select query, but I still get -1.
If I use model->rowCount() I only get the rows that have been fetched for display, e.g. 256, but SELECT COUNT(*) returns 120k results.
What's wrong?
This row count code extract works for SQLite3-based tables and also handles the "fetchMore" issue associated with certain SQLite versions.
QSqlQuery query( m_database );
query.prepare( QString( "SELECT * FROM MyDatabaseTable WHERE SampleNumber = ?;"));
query.addBindValue( _sample_number );
bool table_ok = query.exec();
if ( !table_ok )
{
DATABASETHREAD_REPORT_ERROR( "Error from MyDataBaseTable", query.lastError() );
}
else
{
// only way to get a row count, size function does not work for SQLite3
query.last();
int row_count = query.at() + 1;
qDebug() << "getNoteCounts = " << row_count;
}
The documentation says:
Returns ... -1 if the size cannot be determined or if the database does not support reporting information about query sizes.
SQLite indeed does not support this.
Please note that caching 120k records is not very efficient (nobody will look at all those); you should somehow filter them to get the result down to a manageable size.
I want to know:
If we have a List object created on the server side which contains a large number of entries, such as employee master data (10,000 records), and I want to provide a search option for a valid employee ID or name.
I have tried comparing the entered text against that large list in a loop, which obviously degrades performance.
So is there any option for better performance?
Thanks in advance.
Try this:
public List<Employee> SearchEmployee(string search, int pageNo, int pageLength)
{
    MasterDataContext db = new MasterDataContext();
    int pageStart = (pageNo - 1) * pageLength;

    // Filter, order and page in a single query; calling ToList() only on the
    // requested page keeps the full result set from being loaded into memory.
    var pageResult = (from e in db.Employess
                      where (search == null ||
                             e.Name.ToLower().Contains(search.ToLower()))
                      orderby e.CardNo
                      select e)
                     .Skip(pageStart)
                     .Take(pageLength)
                     .ToList();

    return pageResult;
}
I hope it helps.
I have a method for a display field which does the following:
return InventSum::find(_salesLine.ItemId, _salesLine.InventDimId).AvailPhysical();
This gives me the on-hand Available Physical for the line site/warehouse/location.
I need to see the total available for just the site/warehouse. I think I need to search InventDim by item/warehouse to get my InventDimId, but I cannot find the method, so I suspect this is incorrect.
Can anyone help?
My working solution...
InventDimParm invDimParm;
InventDim warehouseInvDim;
InventDim salesLineInventDim;
;
salesLineInventDim = _salesLine.inventDim();
warehouseInvDim.InventSiteId = salesLineInventDim.InventSiteId;
warehouseInvDim.InventLocationId = salesLineInventDim.InventLocationId;
warehouseInvDim = InventDim::findOrCreate(warehouseInvDim);
invDimParm.initFromInventDim(InventDim::find(warehouseInvDim.inventDimId));
return InventSum::findSum(_salesLine.ItemId,warehouseInvDim,invDimParm).availOrdered();
I know this is for availOrdered() but it works exactly the same for availPhysical()
You should use the InventOnhand class.
It sums the inventory on-hand values based on criteria like item id and inventory dimensions.
There are lots of usage examples in AX; search under the Classes node.
The following job finds all Sales Lines with the Open Order status, which have an Available Physical quantity on hand matching all dimensions specified on the Sales Lines except location:
static void FindOpenSalesLineAvailPhys(Args _args)
{
SalesLine salesline;
InventDim inventDim;
InventDimParm inventDimParm;
InventOnHand inventOnHand;
;
while select salesLine where salesLine.SalesStatus == SalesStatus::Backorder
{
inventDim = salesLine.inventDim();
inventDimParm.initFromInventDim(inventDim);
inventDimParm.WMSLocationIdFlag = NoYes::No;
inventOnHand = InventOnHand::newItemDim(salesLine.ItemId, inventDim, inventDimParm);
if (inventOnHand.availPhysical())
{
info(strfmt("Sales Order %1 Line %2 Item Id %3 Available Physical (ignoring Location) %4",
salesLine.salesId, salesLine.LineNum, salesLine.ItemId, inventOnHand.availPhysical()));
}
}
}
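Applying the same class to the original display-field scenario, here is a minimal sketch (assuming the AX 2009 InventOnhand API used in the job above) that sums Available Physical for just the sales line's site and warehouse:
InventDim     inventDim;
InventDimParm inventDimParm;
InventOnhand  inventOnhand;
;
inventDim = _salesLine.inventDim();

// Only group on site and warehouse; all other dimensions are ignored.
inventDimParm.InventSiteIdFlag     = NoYes::Yes;
inventDimParm.InventLocationIdFlag = NoYes::Yes;

inventOnhand = InventOnhand::newItemDim(_salesLine.ItemId, inventDim, inventDimParm);

return inventOnhand.availPhysical();
This is effectively the same as the initFromInventDim approach in the job above, just with the two flags set explicitly instead of being derived from the sales line's dimensions.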
You basically set your inventDim values the way you want to search for them, and then call InventDim::findOrCreate, which either finds the existing inventory dimension record or creates it (consuming a new number sequence value). This is done so that the InventDim table doesn't store every single possible combination of dimensions; if you have any serialized products, for example, it isn't feasible for the table to store all of the combinations, so it only stores the ones it actually needs.
InventDim inventDim;
SalesLine _salesLine;
;
inventDim.InventSiteId = 'mySite';
inventDim.InventLocationId = 'myWarehouse';
inventDim = InventDim::findOrCreate(inventDim);
return InventSum::find(_salesLine.ItemId, inventDim.inventDimId).AvailPhysical();
Log file 1 contains records of customers (name, id, date) who visited yesterday.
Log file 2 contains records of customers (name, id, date) who visited today.
How would you display the customers who visited yesterday but not today?
Constraint: don't use an auxiliary data structure, because the files contain millions of records. [So, no hashes.]
Is there a way to do this using Unix commands?
An example (but check the man page of comm for the exact options you want); -23 suppresses the lines unique to today and the lines common to both, leaving only the customers who visited yesterday but not today:
comm -23 <(sort -u yesterday) <(sort -u today)
The other tool you can use is diff
diff <(sort -u yesterday) <(sort -u today)
I was personally going to go with creating a data structure and recording the visits, but I can see how you'd do it another way too.
In pseudocode it looks something like Python, but it could be rewritten in Perl, a shell script, or ...
import fileinput
import subprocess

# Walk yesterday's log; print any customer whose id does not appear in today's log.
for line in fileinput.input(['yesterday']):
    # split out data; for the sake of it I'm assuming name\tid\tdate
    fields = line.rstrip("\n").split("\t")
    customer_id = fields[1]
    grepresult = subprocess.run(["grep", customer_id, "today"],
                                capture_output=True, text=True).stdout
    if len(grepresult) == 0:
        print(fields)  # the id wasn't in today's log
That's not perfect and not tested, so treat it appropriately, but it gives you the gist of how you'd use Unix commands. That said, as sfussenegger points out, C/C++ (if that's what you're using) should be able to handle pretty large files.
Disclaimer: this is a not-so-neat solution (repeatedly calling grep) to match the requirements of the question. If I were doing it, I would use C.
Is a customer identified by id? Is it an int or a long? If the answer to both questions is yes, an array with 10,000,000 integers shouldn't take more than 10M * 4 bytes = 40 MB of memory, which is not a big deal on decent hardware. Simply sort both arrays and compare them.
By the way, sorting an array of 10M random ints takes less than 2 seconds on my machine; again, nothing to be afraid of.
Here's some very simple Java code:
public static void main(final String args[]) throws Exception {
// elements in each log file
int count = 10000000;
// "read" our log file
Random r = new Random();
int[] a1 = new int[count];
int[] a2 = new int[count];
for (int i = 0; i < count; i++) {
a1[i] = Math.abs(r.nextInt());
a2[i] = Math.abs(r.nextInt());
}
// start timer
long start = System.currentTimeMillis();
// sort logs
Arrays.sort(a1);
Arrays.sort(a2);
// counters for each array
int i1 = 0, i2 = 0, i3 = 0;
// initial values
int n1 = a1[0], n2 = a2[0];
// result array
int[] a3 = new int[count];
try {
while (true) {
if (n1 == n2) {
// we found a match, save value if unique and increment counters
if (i3 == 0 || a3[i3-1] != n1) a3[i3++] = n1;
n1 = a1[i1++];
n2 = a2[i2++];
} else if (n1 < n2) {
// n1 is lower, increment counter (next value is higher)
n1 = a1[i1++];
} else {
// n2 is lower, increment counter (next value is higher)
n2 = a2[i2++];
}
}
} catch (ArrayIndexOutOfBoundsException e) {
// don't try this at home - it's not the prettiest way to leave the loop!
}
// we found our results
System.out.println(i3 + " common clients");
System.out.println((System.currentTimeMillis() - start) + "ms");
}
Result:
// sample output on my machine:
46308 common clients
3643ms
As you can see, it's quite efficient for 10M records in each log.