I have a "Upload Record" PXAction to load records to grid and release these records - grid

I have a custom PXButton called UploadRecords; when I click this button, the grid should be populated with records and those records released.
The Release action is pressed inside the UploadRecords action delegate. The problem with this code is that it works properly when Release handles a small number of records, but when thousands of records are passed to Release it takes a huge amount of time (> 30 min.) and fails with an execution-timeout error.
Please suggest how to reduce the execution time and release the records faster.
using System;
using System.Collections;
using System.Collections.Generic;
using PX.Data;
using PX.Objects.CS;
using PX.Objects.IN;

namespace PX.Objects.AR
{
    public class ARPriceWorksheetMaint_Extension : PXGraphExtension<ARPriceWorksheetMaint>
    {
        // Constant used to restrict the selection to items whose sales unit is "EA".
        public class string_R112 : Constant<string>
        {
            public string_R112()
                : base("EA")
            {
            }
        }

        public PXSelectJoin<InventoryItem,
            InnerJoin<CSAnswers, On<InventoryItem.noteID, Equal<CSAnswers.refNoteID>>,
            LeftJoin<INItemCost, On<InventoryItem.inventoryID, Equal<INItemCost.inventoryID>>>>,
            Where<InventoryItem.salesUnit, Equal<string_R112>>> records;

        public PXAction<ARPriceWorksheet> uploadRecord;
        [PXUIField(DisplayName = "Upload Records", MapEnableRights = PXCacheRights.Select, MapViewRights = PXCacheRights.Select)]
        [PXButton]
        public IEnumerable UploadRecord(PXAdapter adapter)
        {
            using (PXTransactionScope ts = new PXTransactionScope())
            {
                foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in records.Select())
                {
                    InventoryItem invItem = (InventoryItem)res;
                    INItemCost itemCost = (INItemCost)res;
                    CSAnswers csAnswer = (CSAnswers)res;

                    ARPriceWorksheetDetail gridDetail = new ARPriceWorksheetDetail();
                    gridDetail.PriceType = PriceTypeList.CustomerPriceClass;
                    gridDetail.PriceCode = csAnswer.AttributeID;
                    gridDetail.AlternateID = "";
                    gridDetail.InventoryID = invItem.InventoryID;
                    gridDetail.Description = invItem.Descr;
                    gridDetail.UOM = "EA"; // hard-coded unit of measure
                    gridDetail.SiteID = 6; // hard-coded warehouse ID

                    InventoryItemExt invExt = PXCache<InventoryItem>.GetExtension<InventoryItemExt>(invItem);

                    // Parse the attribute value; retry with spaces stripped if the first attempt fails.
                    decimal y;
                    if (!decimal.TryParse(csAnswer.Value, out y))
                    {
                        y = decimal.Parse(csAnswer.Value.Replace(" ", ""));
                    }

                    gridDetail.CurrentPrice = y; //(invExt.UsrMarketCost ?? 0m) * (Math.Round(y / 100, 2));
                    gridDetail.PendingPrice = y; //(invExt.UsrMarketCost ?? 0m) * (Math.Round(y / 100, 2));
                    gridDetail.TaxID = null;
                    Base.Details.Update(gridDetail);
                }
                ts.Complete();
            }

            Base.Document.Current.Hold = false;
            using (PXTransactionScope ts = new PXTransactionScope())
            {
                Base.Release.Press();
                ts.Complete();
            }

            return new List<ARPriceWorksheet> { Base.Document.Current };
        }

        protected void ARPriceWorksheet_RowSelected(PXCache cache, PXRowSelectedEventArgs e, PXRowSelected InvokeBaseHandler)
        {
            if (InvokeBaseHandler != null)
                InvokeBaseHandler(cache, e);
            var row = (ARPriceWorksheet)e.Row;
            if (row == null) return;
            uploadRecord.SetEnabled(row.Status != SPWorksheetStatus.Released);
        }
    }
}

First, do you need them all to be in a single transaction scope? A single scope reverts all changes if any record throws an exception. If you need them all committed together, rather than record by record, you would have to perform the updates this way.
I would suggest, though, moving your process to a custom processing screen. That way you can load the records, select one or many, and let the processing engine built into Acumatica handle the work, rather than a single button-click action. Here is an example: https://www.acumatica.com/blog/creating-custom-processing-screens-in-acumatica/
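As a rough sketch of what that looks like (the graph name and empty delegate body here are hypothetical, not from the question; the linked post walks through a complete version):

public class WorksheetReleaseProcess : PXGraph<WorksheetReleaseProcess>
{
    // Rows the user can select for processing on the screen.
    public PXProcessing<ARPriceWorksheetDetail> DetailsToProcess;

    public WorksheetReleaseProcess()
    {
        // The processing engine runs this delegate outside the normal
        // page request, so a long-running release no longer races the
        // page execution timeout.
        DetailsToProcess.SetProcessDelegate(detail =>
        {
            // Per-record release logic goes here.
        });
    }
}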
Based on the feedback that it must all be in a single transaction scope, and that there are thousands of records, I can only see two optimizations that may assist. The first is increasing the timeout, as explained in this blog post: https://acumaticaclouderp.blogspot.com/2017/12/acumatica-snapshots-uploading-and.html
Next, I would load all records into memory first with ToList() and then loop through them. That should save you time, since it pulls all records at once rather than issuing one query per record.
Going from:
foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in records.Select())
to:
var recordList = records.Select().ToList();
foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in recordList)

Related

Razor, ASP.NET Core, Entity Framework: updating some fields

I'm trying to update two entities at the same time, but the change is not applied, and I think that when I try to return the updated entity it isn't even found.
Here is my Razor page's OnPost handler:
public IActionResult OnPost()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }
    repositorioFamiliar.Actualizar(Familiar);
    return RedirectToPage("/Familiares/DetalleFamiliar", new { IdPaciente = Familiar.IdPaciente });
}
Here is my update function:
public FamiliaresPer Actualizar(FamiliaresPer familiar)
{
    var familiarActualizar = (from f in _context.Familiars.Where(p => p.IdFamiliar == familiar.IdFamiliar) select f).FirstOrDefault();
    if (familiarActualizar != null)
    {
        familiarActualizar.Correo = familiar.Correo;
        _context.Update(familiarActualizar);
        _context.SaveChanges();
    }
    var personaActualizar = (from p in _context.Personas.Where(p => p.Id == familiar.IdPersona) select p).FirstOrDefault();
    if (personaActualizar != null)
    {
        personaActualizar.Telefono = familiar.Telefono;
        _context.Update(personaActualizar);
        _context.SaveChanges();
    }
    var familiares = from p in _context.Familiars
                     from p1 in _context.Personas
                     where p.IdPaciente == familiar.IdPaciente
                     where p.IdPersona == p1.IdPersona
                     select new FamiliaresPer()
                     {
                         IdFamiliar = p.IdFamiliar,
                         IdPaciente = p.IdPaciente,
                         IdPersona = p1.IdPersona,
                         Id = p1.Id,
                         Nombres = p1.Nombres,
                         Apellidos = p1.Apellidos,
                         Genero = p1.Genero,
                         Telefono = p1.Telefono,
                         Parentesco = p.Parentesco,
                         Correo = p.Correo,
                     };
    FamiliaresPer familiaresPer = familiares.FirstOrDefault();
    return familiaresPer;
}
When I submit the form I get an error:
NullReferenceException: Object reference not set to an instance of an object
and the link shows IdPaciente = 0 when it should use the same IdPaciente as the updated entity (the Id never changes).
In your OnPost() action method you call repositorioFamiliar.Actualizar(Familiar); but it looks like you never defined (bound) 'Familiar', for example with a [BindProperty] attribute, so it is null when the handler runs.
In addition, looking at your code, I can give you one piece of advice. Say your first update completes correctly and you get an error in the second one. You want both to be updated at the same time, yet the first object is already saved in the database and the second one isn't. That is a problem, right? The Unit of Work design pattern is very useful for solving this.
In brief, the approach should be to call SaveChanges() once, after both update operations are complete, so nothing is written to the database until both updates have succeeded.
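As a minimal sketch of that idea applied to the Actualizar method above (same entities; only the SaveChanges placement changes, and the final projection query is elided):

public FamiliaresPer Actualizar(FamiliaresPer familiar)
{
    var familiarActualizar = _context.Familiars
        .FirstOrDefault(f => f.IdFamiliar == familiar.IdFamiliar);
    if (familiarActualizar != null)
    {
        familiarActualizar.Correo = familiar.Correo;
    }

    var personaActualizar = _context.Personas
        .FirstOrDefault(p => p.Id == familiar.IdPersona);
    if (personaActualizar != null)
    {
        personaActualizar.Telefono = familiar.Telefono;
    }

    // One unit of work: both updates are persisted together, or neither is.
    // (Entities loaded from the context are tracked, so no explicit
    // _context.Update call is needed.)
    _context.SaveChanges();

    // ... run the same projection query as before and return it ...
    return familiar; // placeholder: keep the original projection query here
}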

Application Cache and Slow Process

I want to create an application-wide feed on my ASP.NET 3.5 web site using the application cache. The data I am using to populate the cache is slow to obtain, maybe up to 10 seconds (it comes from a remote server's data feed). My question/confusion is: what is the best way to structure the cache management?
private const string CacheKey = "MyCachedString";
private static string lockString = "";

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    if (data == null)
    {
        // A - Should this method call go here?
        newData = SlowResourceMethod();
        lock (lockString)
        {
            data = (string)Cache[CacheKey];
            if (data != null)
            {
                return data;
            }
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            Cache[CacheKey] = data = newData;
        }
    }
    return data;
}
The actual method would be exposed by an HttpHandler (.ashx).
If I collect the data at point 'A', I keep the lock time short but might end up calling the external resource many times (from web pages all trying to reference the feed). If I put it at point 'B', the lock time will be long, which I assume is a bad thing.
What is the best approach, or is there a better pattern that I could use?
Any advice would be appreciated.
I've added comments to the code.
private const string CacheKey = "MyCachedString";
private static readonly object syncLock = new object();

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    // Start by checking whether it is already in the cache.
    if (data == null)
    {
        // A - Should this method call go here?
        // Absolutely not here:
        // newData = SlowResourceMethod();

        // We wait here in case someone else is already building it.
        lock (syncLock)
        {
            // Now let's see if someone else has built it in the meantime...
            data = (string)Cache[CacheKey];
            // We have it; send it.
            if (data != null)
            {
                return data;
            }
            // We don't have it; now is the time to fetch it.
            // B - Or here, within the lock? Yes, here.
            newData = SlowResourceMethod();
            // Store it in the cache.
            Cache[CacheKey] = data = newData;
        }
    }
    return data;
}
Better, in my opinion, is to use a Mutex whose name depends on CacheKey, so you lock only the resource in question and not every unrelated one. A basic, simple example with a Mutex:
private const string CacheKey = "MyCachedString";

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    // Start by checking whether it is already in the cache.
    if (data == null)
    {
        // Lock based on the resource key.
        // (Note that not all characters are valid in a mutex name, and
        // initiallyOwned must be false so WaitOne/ReleaseMutex pair correctly.)
        var mut = new Mutex(false, CacheKey);
        try
        {
            // Wait until it is safe to enter,
            // but give up after 30 seconds at most.
            mut.WaitOne(30000);
            // Now let's see if someone else has built it in the meantime...
            data = (string)Cache[CacheKey];
            // We have it; send it.
            if (data != null)
            {
                return data;
            }
            // We don't have it; now is the time to fetch it.
            // B - Or here, within the lock? Yes, here.
            newData = SlowResourceMethod();
            // Store it in the cache.
            Cache[CacheKey] = data = newData;
        }
        finally
        {
            // Release the Mutex.
            mut.ReleaseMutex();
        }
    }
    return data;
}
You can also read
Image caching issue by using files in ASP.NET

DevExpress XtraScheduler hanging on adding appointment to Storage

I have an XtraScheduler SchedulerControl configured as follows:
private DevExpress.XtraScheduler.SchedulerControl _SchedulerControl;

public DevExpress.XtraScheduler.SchedulerControl ConvSchedulerControl
{
    get
    {
        if (_SchedulerControl == null)
        {
            _SchedulerControl = new DevExpress.XtraScheduler.SchedulerControl();
            _SchedulerControl.Storage = new SchedulerStorage();
            _SchedulerControl.Storage.Appointments.Mappings.Subject = "StandingOrderIDString";
            _SchedulerControl.Storage.Appointments.Mappings.Start = "ScheduledDate";
            _SchedulerControl.Storage.Appointments.Mappings.RecurrenceInfo = "RecurrenceInfo";
            _SchedulerControl.Storage.Appointments.Mappings.Type = "Type";
            _SchedulerControl.Storage.Appointments.CustomFieldMappings.Add(new DevExpress.XtraScheduler.AppointmentCustomFieldMapping("Inactive", "Inactive"));
            _SchedulerControl.Storage.Appointments.CustomFieldMappings.Add(new DevExpress.XtraScheduler.AppointmentCustomFieldMapping("StandingOrderKEY", "StandingOrderKEY"));
            BindingSource bs = new BindingSource();
            bs.DataSource = new List<StandingOrder>();
            _SchedulerControl.Storage.Appointments.DataSource = bs;
        }
        return _SchedulerControl;
    }
}
and I am attempting to programmatically add an appointment with recurrence information, as in the examples given at http://help.devexpress.com/#WindowsForms/CustomDocument6201. However, when execution reaches the method's final line (indicated), which adds the created appointment to the storage, it hangs. No exception is ever thrown; I have left it running for upwards of 15 minutes with no change:
public void SetRecurrence(DateTime startDate, DateTime? endDate)
{
    Appointment appointmentObj = ConvSchedulerControl.Storage.CreateAppointment(AppointmentType.Pattern);
    if (endDate != null &&
        endDate != DateTime.Parse("12/31/2999"))
    {
        appointmentObj.End = (DateTime)endDate;
    }
    else
    {
        appointmentObj.RecurrenceInfo.Range = RecurrenceRange.NoEndDate;
    }
    appointmentObj.Start = startDate;
    appointmentObj.RecurrenceInfo.Type = RecurrenceType.Weekly;
    appointmentObj.RecurrenceInfo.WeekDays = WeekDays.Monday;
    appointmentObj.AllDay = true;
    //Program execution reaches this line, but never proceeds past it.
    ConvSchedulerControl.Storage.Appointments.Add(appointmentObj);
}
I would imagine there's something wrong with the configuration that is preventing the storage from successfully adding the appointment, but I've been unable to turn up any other information on the subject. Does anyone know why this method isn't working for adding an appointment to the storage, and how it can be corrected?
You've failed to provide a mapping for the 'End' field, which is a required mapping. Honestly, I only know this from having created a calendar in the designer: when you place a SchedulerControl onto a form/control, one of the things the designer gives you is a "Mappings Wizard", and the 'Start' and 'End' fields are marked in that wizard as required.
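In the configuration above, that would mean adding one more mapping line next to the others (the "ScheduledEndDate" column name is a placeholder; use whatever end-date field your StandingOrder items actually expose):

// Map the required 'End' field; "ScheduledEndDate" is hypothetical.
_SchedulerControl.Storage.Appointments.Mappings.End = "ScheduledEndDate";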

X++: passing the currently selected records in a form to your report

I am trying to make this question sound as clear as possible.
Basically, I have created a report, and it now exists as a menu-item button so that the report can be run from the form.
What I would like to do is multi-select records, then, when I click the button to run my report, have the currently selected records passed into the dialog form (filter screen) that appears.
I have tried to do this using the same methods as the SaleLinesEdit form, but had no success.
If anyone could point me in the right direction I would greatly appreciate it.
Take a look at the Axaptapedia article on passing values between forms; it should help you. You will probably have to modify your report to use a form for the dialog rather than the report's base dialog methods. Here is a good place to start with that!
Just wanted to add this: you can use the MultiSelectionHelper class to do this very simply:
MultiSelectionHelper selection = MultiSelectionHelper::createFromCaller(_args.caller());
MyTable myTable = selection.getFirst();
while (myTable)
{
    //do something
    myTable = selection.getNext();
}
Here is the resolution I used for this issue: two methods on the report so that when records are multi-selected on a form, the values are passed to the filter dialog.
private void setQueryRange(Common _common)
{
    FormDataSource        fds;
    LogisticsControlTable logisticsTable;
    QueryBuildDataSource  qbdsLogisticsTable;
    QueryBuildRange       qbrLogisticsId;
    str                   rangeLogId;
    Set                   logIdSet = new Set(Types::String);

    // Appends _value to the comma-separated range string, spilling into a new
    // QueryBuildRange when the 255-character limit would be exceeded, and
    // skipping values already seen (tracked in _set).
    str addRange(str _range, str _value, QueryBuildDataSource _qbds, int _fieldNum, Set _set = null)
    {
        str ret = _range;
        QueryBuildRange qbr;
        ;
        if (_set && _set.in(_value))
        {
            return ret;
        }
        if (strLen(ret) + strLen(_value) + 1 > 255)
        {
            qbr = _qbds.addRange(_fieldNum);
            qbr.value(ret);
            ret = '';
        }
        if (ret)
        {
            ret += ',';
        }
        if (_set)
        {
            _set.add(_value);
        }
        ret += _value;
        return ret;
    }
    ;
    switch (_common.TableId)
    {
        case tableNum(LogisticsControlTable):
            qbdsLogisticsTable = element.query().dataSourceTable(tableNum(LogisticsControlTable));
            qbrLogisticsId = qbdsLogisticsTable.addRange(fieldNum(LogisticsControlTable, LogisticsId));
            fds = _common.dataSource();
            for (logisticsTable = fds.getFirst(true) ? fds.getFirst(true) : _common;
                 logisticsTable;
                 logisticsTable = fds.getNext())
            {
                rangeLogId = addRange(rangeLogId, logisticsTable.LogisticsId, qbdsLogisticsTable, fieldNum(LogisticsControlTable, LogisticsId), logIdSet);
            }
            qbrLogisticsId.value(rangeLogId);
            break;
    }
}
// This sets up the query and passes the selected values to the range, i.e. "SO0001, SO0002, SO0003, ..."
The second method is as follows:
private void setQueryEnableDS()
{
    Query queryLocal = element.query();
    ;
}
Also, this is required on the init method:
public void init()
{
    ;
    super();
    if (element.args() && element.args().dataset())
    {
        this.setQueryRange(element.args().record());
    }
}
Hope this helps in the future for anyone else who has the issue I had.

Raven DB DocumentStore - throws out of memory exception

I have code like this:
public bool Set(IEnumerable<WhiteForest.Common.Entities.Projections.RequestProjection> requests)
{
    var documentSession = _documentStore.OpenSession();
    try
    {
        foreach (var request in requests)
        {
            documentSession.Store(request);
        }
        //requests.AsParallel().ForAll(x => documentSession.Store(x));
        documentSession.SaveChanges();
        documentSession.Dispose();
        return true;
    }
    catch (Exception e)
    {
        _log.LogDebug("Exception in RavenRequstRepository - Set. Exception is [{0}]", e.ToString());
        return false;
    }
}
This code gets called many times. After around 50,000 documents have passed through it, I get an OutOfMemoryException.
Any idea why? Perhaps after a while I need to declare a new DocumentStore?
Thank you.
UPDATE:
I ended up using the Batch/Patch API to perform the update I needed.
You can see the discussion here: https://groups.google.com/d/topic/ravendb/3wRT9c8Y-YE/discussion
Basically, since I only needed to update one property on my objects, and after considering Ayende's comments about re-serializing all the objects back to JSON, I did something like this:
internal void Patch()
{
    List<string> docIds = new List<string>() { "596548a7-61ef-4465-95bc-b651079f4888", "cbbca8d5-be45-4e0d-91cf-f4129e13e65e" };
    using (var session = _documentStore.OpenSession())
    {
        session.Advanced.DatabaseCommands.Batch(GenerateCommands(docIds));
    }
}

private List<ICommandData> GenerateCommands(List<string> docIds)
{
    List<ICommandData> retList = new List<ICommandData>();
    foreach (var item in docIds)
    {
        retList.Add(new PatchCommandData()
        {
            Key = item,
            Patches = new[]
            {
                new Raven.Abstractions.Data.PatchRequest()
                {
                    Name = "Processed",
                    Type = Raven.Abstractions.Data.PatchCommandType.Set,
                    Value = new RavenJValue(true)
                }
            }
        });
    }
    return retList;
}
Hope this helps. Thanks a lot.
I just did this for my current project. I chunked the data into pieces and saved each chunk in a new session. This may work for you, too.
Note that this example chunks 1,024 documents at a time, but requires at least 2,000 before deciding that chunking is worth it. So far, my inserts get the best performance with a chunk size of 4,096. I think that's because my documents are relatively small.
internal static void WriteObjectList<T>(List<T> objectList)
{
    int numberOfObjectsThatWarrantChunking = 2000; // Don't bother chunking unless we have at least this many objects.

    if (objectList.Count < numberOfObjectsThatWarrantChunking)
    {
        // Just write them all at once.
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            objectList.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
        return;
    }

    int numberOfDocumentsPerSession = 1024; // Chunk size

    List<List<T>> objectListInChunks = new List<List<T>>();
    for (int i = 0; i < objectList.Count; i += numberOfDocumentsPerSession)
    {
        objectListInChunks.Add(objectList.Skip(i).Take(numberOfDocumentsPerSession).ToList());
    }

    Parallel.ForEach(objectListInChunks, listOfObjects =>
    {
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            listOfObjects.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
    });
}

private static IDocumentSession GetRavenSession()
{
    return _ravenDatabase.OpenSession();
}
Are you trying to save it all in one call?
The DocumentSession needs to turn all of the objects that you pass it into a single request to the server. That means it may allocate a lot of memory for the write to the server.
Usually we recommend batches of about 1,024 items if you are doing bulk saves.
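A minimal sketch of that recommendation applied to the Set method from the question (SetInBatches is a hypothetical name; the 1,024 batch size follows the advice above):

public void SetInBatches(IEnumerable<WhiteForest.Common.Entities.Projections.RequestProjection> requests)
{
    const int batchSize = 1024;
    var session = _documentStore.OpenSession();
    try
    {
        int count = 0;
        foreach (var request in requests)
        {
            session.Store(request);
            if (++count % batchSize == 0)
            {
                // Flush this batch and open a fresh session so the
                // objects tracked so far can be garbage-collected.
                session.SaveChanges();
                session.Dispose();
                session = _documentStore.OpenSession();
            }
        }
        session.SaveChanges(); // flush the final partial batch
    }
    finally
    {
        session.Dispose();
    }
}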
DocumentStore is a disposable class, so I worked around this problem by disposing the instance after each chunk. I highly doubt it is the most efficient way to run operations, but it will prevent significant memory overhead.
I was running a sort of "delete all" operation like so. You can see the using blocks disposing both the DocumentStore and the IDocumentSession objects after each chunk.
static DocumentStore GetDataStore()
{
    DocumentStore ds = new DocumentStore
    {
        DefaultDatabase = "test",
        Url = "http://localhost:8080"
    };
    ds.Initialize();
    return ds;
}

static IDocumentSession GetDbInstance(DocumentStore ds)
{
    return ds.OpenSession();
}

static void Main(string[] args)
{
    int deleteCount = 0;
    int deleteSum = 0;
    do
    {
        using (var ds = GetDataStore())
        using (var db = GetDbInstance(ds))
        {
            //The `Take` operation will cap out at 1,024 by default, per Raven documentation
            var list = db.Query<MyClass>().Skip(deleteSum).Take(5000).ToList();
            deleteCount = list.Count;
            deleteSum += deleteCount;
            foreach (var item in list)
            {
                db.Delete(item);
            }
            db.SaveChanges();
            list.Clear();
        }
    } while (deleteCount > 0);
}
