Application Cache and Slow Process - asp.net

I want to create an application-wide feed on my ASP.NET 3.5 web site using the application cache. The data that I am using to populate the cache is slow to obtain, maybe up to 10 seconds (it comes from a remote server's data feed). My question/confusion is: what is the best way to structure the cache management?
private const string CacheKey = "MyCachedString";
private static string lockString = "";

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    if (data == null)
    {
        // A - Should this method call go here?
        newData = SlowResourceMethod();
        lock (lockString)
        {
            data = (string)Cache[CacheKey];
            if (data != null)
            {
                return data;
            }
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            Cache[CacheKey] = data = newData;
        }
    }
    return data;
}
The actual method would be exposed by an HttpHandler (.ashx).
If I collect the data at point 'A', I keep the lock time short, but might end up calling the external resource many times (from web pages all trying to reference the feed). If I put it at point 'B', the lock time will be long, which I am assuming is a bad thing.
What is the best approach, or is there a better pattern that I could use?
Any advice would be appreciated.

I've added comments to the code.
private const string CacheKey = "MyCachedString";
private static readonly object syncLock = new object();

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    // First, check whether you already have it in the cache.
    if (data == null)
    {
        // A - Should this method call go here?
        // Absolutely not here:
        // newData = SlowResourceMethod();
        // We arrive here and wait, in case someone else is already making it.
        lock (syncLock)
        {
            // Now let's see if someone else made it in the meantime...
            data = (string)Cache[CacheKey];
            // We have it; return it.
            if (data != null)
            {
                return data;
            }
            // We don't have it, so now is the time to fetch it.
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            // Store it in the cache.
            Cache[CacheKey] = data = newData;
        }
    }
    return data;
}
Better, in my opinion, is to use a Mutex whose name depends on CacheKey, so you lock only the resource in question and not unrelated ones. A basic example with a Mutex:
private const string CacheKey = "MyCachedString";

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    // First, check whether you already have it in the cache.
    if (data == null)
    {
        // Lock based on the resource key.
        // (Note that not all characters are valid in a mutex name.)
        // Do not request initial ownership here; WaitOne acquires it below.
        var mut = new Mutex(false, CacheKey);
        try
        {
            // Wait until it is safe to enter, but give up after 30 seconds.
            // (Production code should check WaitOne's return value.)
            mut.WaitOne(30000);
            // Now let's see if someone else made it in the meantime...
            data = (string)Cache[CacheKey];
            // We have it; return it.
            if (data != null)
            {
                return data;
            }
            // We don't have it, so now is the time to fetch it.
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            // Store it in the cache.
            Cache[CacheKey] = data = newData;
        }
        finally
        {
            // Release the Mutex.
            mut.ReleaseMutex();
        }
    }
    return data;
}
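As a side note, not part of the original answers: since the data is a feed, you will likely want it to refresh periodically. The System.Web.Caching API lets you store the value with an absolute expiration, so a stale feed ages out and the next request repopulates it under the same lock. A minimal sketch, assuming a 5-minute refresh interval:

// Sketch: cache the feed for 5 minutes; the interval is an assumption.
Cache.Insert(CacheKey, newData, null,
    DateTime.UtcNow.AddMinutes(5),
    System.Web.Caching.Cache.NoSlidingExpiration);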
You can also read
Image caching issue by using files in ASP.NET

Related

I have a "Upload Record" PXAction to load records to grid and release these records

I have a custom PXButton called UploadRecords; when I click this button, it should populate the grid with records and release them.
The Release action is pressed in the UploadRecords action delegate. The problem with this code is that it works properly when releasing a small number of records, but when thousands of records are passed to the release action, it takes a huge amount of time (> 30 min.) and fails with an Execution timeout error.
Please suggest how to avoid the long execution time and release the records faster.
namespace PX.Objects.AR
{
    public class ARPriceWorksheetMaint_Extension : PXGraphExtension<ARPriceWorksheetMaint>
    {
        //public class string_R112 : Constant<string>
        //{
        //    public string_R112()
        //        : base("4E5CCAFC-0957-4DB3-A4DA-2A24EA700047")
        //    {
        //    }
        //}

        public class string_R112 : Constant<string>
        {
            public string_R112()
                : base("EA")
            {
            }
        }

        public PXSelectJoin<InventoryItem,
            InnerJoin<CSAnswers, On<InventoryItem.noteID, Equal<CSAnswers.refNoteID>>,
            LeftJoin<INItemCost, On<InventoryItem.inventoryID, Equal<INItemCost.inventoryID>>>>,
            Where<InventoryItem.salesUnit, Equal<string_R112>>> records;

        public PXAction<ARPriceWorksheet> uploadRecord;
        [PXUIField(DisplayName = "Upload Records", MapEnableRights = PXCacheRights.Select, MapViewRights = PXCacheRights.Select)]
        [PXButton]
        public IEnumerable UploadRecord(PXAdapter adapter)
        {
            using (PXTransactionScope ts = new PXTransactionScope())
            {
                foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in records.Select())
                {
                    InventoryItem invItem = (InventoryItem)res;
                    INItemCost itemCost = (INItemCost)res;
                    CSAnswers csAnswer = (CSAnswers)res;
                    ARPriceWorksheetDetail gridDetail = new ARPriceWorksheetDetail();
                    gridDetail.PriceType = PriceTypeList.CustomerPriceClass;
                    gridDetail.PriceCode = csAnswer.AttributeID;
                    gridDetail.AlternateID = "";
                    gridDetail.InventoryID = invItem.InventoryID;
                    gridDetail.Description = invItem.Descr;
                    gridDetail.UOM = "EA";
                    gridDetail.SiteID = 6;
                    InventoryItemExt invExt = PXCache<InventoryItem>.GetExtension<InventoryItemExt>(invItem);
                    decimal y;
                    if (!decimal.TryParse(csAnswer.Value, out y))
                    {
                        y = decimal.Parse(csAnswer.Value.Replace(" ", ""));
                    }
                    gridDetail.CurrentPrice = y; //(invExt.UsrMarketCost ?? 0m) * (Math.Round(y / 100, 2));
                    gridDetail.PendingPrice = y; //(invExt.UsrMarketCost ?? 0m) * (Math.Round(y / 100, 2));
                    gridDetail.TaxID = null;
                    Base.Details.Update(gridDetail);
                }
                ts.Complete();
            }
            Base.Document.Current.Hold = false;
            using (PXTransactionScope ts = new PXTransactionScope())
            {
                Base.Release.Press();
                ts.Complete();
            }
            List<ARPriceWorksheet> lst = new List<ARPriceWorksheet>
            {
                Base.Document.Current
            };
            return lst;
        }

        protected void ARPriceWorksheet_RowSelected(PXCache cache, PXRowSelectedEventArgs e, PXRowSelected InvokeBaseHandler)
        {
            if (InvokeBaseHandler != null)
                InvokeBaseHandler(cache, e);
            var row = (ARPriceWorksheet)e.Row;
            uploadRecord.SetEnabled(row.Status != SPWorksheetStatus.Released);
        }
    }
}
First, do you need them all to be in a single transaction scope? A single scope reverts all changes if any record throws an exception. If you need them all committed together or not at all, then you would have to perform the updates this way; otherwise, committing record by record keeps each transaction short, as sketched below.
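If per-record commits are acceptable, a minimal sketch (reusing the question's own names; the loop body is abbreviated) would look like this:

foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in records.Select())
{
    // One scope per record: a failure rolls back only that record.
    using (PXTransactionScope ts = new PXTransactionScope())
    {
        ARPriceWorksheetDetail gridDetail = new ARPriceWorksheetDetail();
        // ... populate gridDetail exactly as in the original loop ...
        Base.Details.Update(gridDetail);
        ts.Complete();
    }
}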
I would suggest though moving your process to a custom processing screen. This way you can load the records, select one or many, and use the processing engine built into Acumatica to handle the process, rather than a single button click action. Here is an example: https://www.acumatica.com/blog/creating-custom-processing-screens-in-acumatica/
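For illustration only, here is a minimal sketch of such a screen, assuming the standard PXProcessing pattern (the graph name and delegate body are hypothetical):

public class ARWorksheetReleaseProcess : PXGraph<ARWorksheetReleaseProcess>
{
    // Processing view: the grid lists worksheets and lets the user select them.
    public PXProcessing<ARPriceWorksheet> Worksheets;

    public ARWorksheetReleaseProcess()
    {
        // The delegate runs on a background thread with built-in
        // per-record status and error reporting.
        Worksheets.SetProcessDelegate(worksheet =>
        {
            var graph = PXGraph.CreateInstance<ARPriceWorksheetMaint>();
            graph.Document.Current = worksheet;
            graph.Release.Press();
        });
    }
}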
Based on the feedback that it must be all in a single transaction scope and thousands of records, I can only see two optimizations that may assist. First is increasing the Timeout as explained in this blog post. https://acumaticaclouderp.blogspot.com/2017/12/acumatica-snapshots-uploading-and.html
Next, I would load all records into memory first with ToList() and then loop through them. That might save you time, as it should pull all records at once rather than once per record.
Going from:

foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in records.Select())

to:

var recordList = records.Select().ToList();
foreach (PXResult<InventoryItem, CSAnswers, INItemCost> res in recordList)

MemoryCache.Default.AddOrGetExisting returns null although the key is in the cache

I am writing unit tests for my ASP.NET Web API application, and one of them is trying to verify that AddOrGetExisting works correctly. According to the MSDN documentation, AddOrGetExisting returns the existing item if one is already saved; if not, it writes the new value into the cache.
The problem I am having is that if I add the key to the MemoryCache object from a unit test and then call AddOrGetExisting, it always returns null and overwrites the value instead of returning the value that is already saved. I verify that the value is in the cache right before I call AddOrGetExisting (bool isIn evaluates to true).
Here is the code for my memory cache and the test method. Any help would be much appreciated:
public static class RequestCache
{
    public static TEntity GetFromCache<TEntity>(string key, Func<TEntity> valueFactory) where TEntity : class
    {
        ObjectCache cache = MemoryCache.Default;
        var newValue = new Lazy<TEntity>(valueFactory);
        CacheItemPolicy policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(60) };
        bool isIn = cache.Contains(key);
        // Returns the existing item, or adds the new value if it doesn't exist
        var value = cache.AddOrGetExisting(key, newValue, policy) as Lazy<TEntity>;
        return (value ?? newValue).Value;
    }
}
public string TestGetFromCache_Helper()
{
    return "Test3and4Values";
}

[TestMethod]
public void TestGetFromCache_ShouldGetItem()
{
    ObjectCache cache = MemoryCache.Default;
    CacheItemPolicy policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(60) };
    var cacheKey = "Test3";
    var expectedValue = "Test3Value";
    cache.AddOrGetExisting(cacheKey, expectedValue, policy);
    var result = Models.RequestCache.GetFromCache(cacheKey,
        () =>
        {
            return TestGetFromCache_Helper();
        });
    Assert.AreEqual(expectedValue, result);
}
The issue may be that you're passing a Lazy<TEntity> as newValue within RequestCache.GetFromCache but passing a string as expectedValue in the test method.
When the test runs, cache.Contains(key) confirms that a value is stored for that key. However, it is a string rather than a Lazy<TEntity>, so the as Lazy<TEntity> cast inside GetFromCache yields null and the method falls back to the newly created Lazy's value instead of the cached string.
The fix for this particular scenario may be to adjust the expectedValue assignment in your test to something like this:
var expectedValue = new Lazy<string>(TestGetFromCache_Helper);
You'd also need to pull the value from the Lazy in the test's final equality comparison, for example:
Assert.AreEqual(expectedValue.Value, result);
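Alternatively, if the cache may legitimately contain entries stored as raw values (as the test does), GetFromCache could be made tolerant of both shapes. A sketch of the relevant lines, under that assumption:

var existing = cache.AddOrGetExisting(key, newValue, policy);
var lazy = existing as Lazy<TEntity>;
if (lazy != null)
{
    return lazy.Value;     // an entry wrapped in Lazy<TEntity> was cached
}
var direct = existing as TEntity;
if (direct != null)
{
    return direct;         // the entry was cached as a plain TEntity
}
return newValue.Value;     // nothing was cached; newValue was just added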

How can we store an HTML page into SQLite on BlackBerry, on the memory card / phone memory?

The code below shows how we can make an HTTP connection on BlackBerry and store the HTML page as a string.
I am doing this, and the HTTP request works, but when I get the response (i.e. HTTP_OK) it is not correct, so I cannot save the HTML text as a string and then store it into SQLite.
LabelField title = new LabelField("SQLite Create Database Sample",
        LabelField.ELLIPSIS | LabelField.USE_ALL_WIDTH);
setTitle(title);
add(new RichTextField("Creating a database."));
argURL = "https://www.google.com:80";
try {
    connDesc = connFact.getConnection(argURL);
    if (connDesc != null) {
        httpConn = (HttpConnection) connDesc.getConnection();
        // //Send Data on this connection
        // httpConn.setRequestMethod(HttpConnection.GET);
        // //Server Response
        StringBuffer strBuffer = new StringBuffer();
        inStream = httpConn.openInputStream();
        int chr;
        int retResponseCode = httpConn.getResponseCode();
        if (retResponseCode == HttpConnection.HTTP_OK) {
            if (inStream != null) {
                while ((chr = inStream.read()) != -1) {
                    strBuffer.append((char) chr);
                }
                serverResponceStr = strBuffer.toString();
                // appLe.alertForms.get_userWaitAlertForm().append("\n" + serverResponceStr);
                // returnCode = gprsConstants.retCodeSuccess;
            }
        } else {
            // returnCode = gprsConstants.retCodeNOK;
        }
    }
} catch (Exception excp) {
    // returnCode = gprsConstants.retCodeDisconn;
    excp.printStackTrace();
}
The code does not perform any database functionality; however, I tested it, and it does successfully perform an HTTP request to an external URL. The data that comes back is based on the response of the server you are making the request to.
The code I used can be found here:
http://snipt.org/vrl7
The only modification was to keep a running summary of various events; the response is displayed in the RichTextField. Basically, this looks to be working as intended, and the resulting String should be able to be saved however you see fit, though you may need to be careful with encoding when saving to a database so that special characters are not lost or misinterpreted.

Raven DB DocumentStore - throws out of memory exception

I have code like this:
public bool Set(IEnumerable<WhiteForest.Common.Entities.Projections.RequestProjection> requests)
{
    try
    {
        // The using block disposes the session even if an exception is thrown.
        using (var documentSession = _documentStore.OpenSession())
        {
            foreach (var request in requests)
            {
                documentSession.Store(request);
            }
            //requests.AsParallel().ForAll(x => documentSession.Store(x));
            documentSession.SaveChanges();
        }
        return true;
    }
    catch (Exception e)
    {
        _log.LogDebug("Exception in RavenRequstRepository - Set. Exception is [{0}]", e.ToString());
        return false;
    }
}
This code gets called many times. After around 50,000 documents have passed through it, I get an OutOfMemoryException.
Any idea why? Perhaps after a while I need to declare a new DocumentStore?
Thank you.
UPDATE:
I ended up using the Batch/Patch API to perform the update I needed.
You can see the discussion here: https://groups.google.com/d/topic/ravendb/3wRT9c8Y-YE/discussion
Basically, since I only needed to update one property on my objects, and after considering Ayende's comments about re-serializing all the objects back to JSON, I did something like this:
internal void Patch()
{
    List<string> docIds = new List<string>() { "596548a7-61ef-4465-95bc-b651079f4888", "cbbca8d5-be45-4e0d-91cf-f4129e13e65e" };
    using (var session = _documentStore.OpenSession())
    {
        session.Advanced.DatabaseCommands.Batch(GenerateCommands(docIds));
    }
}

private List<ICommandData> GenerateCommands(List<string> docIds)
{
    List<ICommandData> retList = new List<ICommandData>();
    foreach (var item in docIds)
    {
        retList.Add(new PatchCommandData()
        {
            Key = item,
            Patches = new[]
            {
                new Raven.Abstractions.Data.PatchRequest()
                {
                    Name = "Processed",
                    Type = Raven.Abstractions.Data.PatchCommandType.Set,
                    Value = new RavenJValue(true)
                }
            }
        });
    }
    return retList;
}
Hope this helps...
Thanks a lot.
I just did this for my current project. I chunked the data into pieces and saved each chunk in a new session. This may work for you, too.
Note that this example chunks 1,024 documents at a time, but only bothers chunking when there are at least 2,000. So far, my inserts got the best performance with a chunk size of 4,096. I think that's because my documents are relatively small.
internal static void WriteObjectList<T>(List<T> objectList)
{
    int numberOfObjectsThatWarrantChunking = 2000; // Don't bother chunking unless we have at least this many objects.
    if (objectList.Count < numberOfObjectsThatWarrantChunking)
    {
        // Just write them all at once.
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            objectList.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
        return;
    }

    int numberOfDocumentsPerSession = 1024; // Chunk size

    List<List<T>> objectListInChunks = new List<List<T>>();
    for (int i = 0; i < objectList.Count; i += numberOfDocumentsPerSession)
    {
        objectListInChunks.Add(objectList.Skip(i).Take(numberOfDocumentsPerSession).ToList());
    }

    Parallel.ForEach(objectListInChunks, listOfObjects =>
    {
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            listOfObjects.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
    });
}

private static IDocumentSession GetRavenSession()
{
    return _ravenDatabase.OpenSession();
}
Are you trying to save it all in one call?
The DocumentSession needs to turn all of the objects that you pass it into a single request to the server. That means it may allocate a lot of memory for the write to the server.
Usually we recommend batches of about 1,024 items if you are doing bulk saves.
DocumentStore is a disposable class, so I worked around this problem by disposing the instance after each chunk. I highly doubt this is the most efficient way to run operations, but it will prevent significant memory overhead from happening.
I was running a sort of "delete all" operation like so. You can see the using blocks disposing both the DocumentStore and the IDocumentSession objects after each chunk.
static DocumentStore GetDataStore()
{
    DocumentStore ds = new DocumentStore
    {
        DefaultDatabase = "test",
        Url = "http://localhost:8080"
    };
    ds.Initialize();
    return ds;
}

static IDocumentSession GetDbInstance(DocumentStore ds)
{
    return ds.OpenSession();
}

static void Main(string[] args)
{
    int deleteCount = 0;
    int deleteSum = 0;
    do
    {
        using (var ds = GetDataStore())
        using (var db = GetDbInstance(ds))
        {
            // The `Take` operation will cap out at 1,024 by default, per Raven documentation.
            var list = db.Query<MyClass>().Skip(deleteSum).Take(5000).ToList();
            deleteCount = list.Count;
            deleteSum += deleteCount;
            foreach (var item in list)
            {
                db.Delete(item);
            }
            db.SaveChanges();
            list.Clear();
        }
    } while (deleteCount > 0);
}

Picking out Just JSON Data Returned from ASP.NET MVC3 controller Update

I've got data returned from my JavaScript client that includes only the data that has changed. That is, I may download an array with each row containing 10 columns of JSON, but on the Update, the only data returned to me is the data that was updated. On my update, I want to update only those columns that changed (not all of them).
In other words, I have code like the below, but because I'm passing in an instance of the "President" class, I have no way of knowing what actually came in on the original JSON.
How can I update just what comes into my MVC3 update method, and not all columns? That is, 8 of the columns may not come in and will be null in the "data" parameter passed in. I don't want to wipe out all my data because of that.
[HttpPost]
public JsonResult Update(President data)
{
    bool success = false;
    string message = "no record found";
    if (data != null && data.Id > 0)
    {
        using (var db = new USPresidentsDb())
        {
            var rec = db.Presidents.FirstOrDefault(a => a.Id == data.Id);
            rec.FirstName = data.FirstName;
            db.SaveChanges();
            success = true;
            message = "Update method called successfully";
        }
    }
    return Json(new
    {
        data,
        success,
        message
    });
}
The simplest fix is to fall back to the existing value when a field comes in null:

rec.FirstName = data.FirstName ?? rec.FirstName;
I would use reflection in this case, because writing it by hand gets messy:

if (data.FirstName != null)
    rec.FirstName = data.FirstName;

and so on for all the fields.
Using reflection, it is easier. See this method:
public static void CopyOnlyModifiedData<T>(T source, ref T destination)
{
    foreach (var propertyInfo in source.GetType().GetProperties())
    {
        object value = propertyInfo.GetValue(source, null);
        // Copy only non-null reference-type values; null means the
        // client did not send that field.
        if (value != null && !value.GetType().IsValueType)
        {
            destination.GetType().GetProperty(propertyInfo.Name, value.GetType()).SetValue(destination, value, null);
        }
    }
}
USAGE:

CopyOnlyModifiedData<President>(data, ref rec);

Please mind that this won't work for value-type properties.
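If the model declares its scalar columns as nullable value types (for example int? or decimal?), a variation like the following, a sketch assuming such a model, can treat null as "not sent" for those properties as well:

public static void CopyOnlyModifiedData<T>(T source, ref T destination)
{
    foreach (var propertyInfo in typeof(T).GetProperties())
    {
        object value = propertyInfo.GetValue(source, null);
        if (value == null)
        {
            continue; // null means "not sent": keep the destination's value
        }
        Type propType = propertyInfo.PropertyType;
        // Reference types and Nullable<T> can represent "unset" as null,
        // so a non-null value is safe to copy across.
        if (!propType.IsValueType || Nullable.GetUnderlyingType(propType) != null)
        {
            propertyInfo.SetValue(destination, value, null);
        }
    }
}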
