Caching a SubSonic ActiveRecord caches the entire object graph - ASP.NET

I am caching a collection of ActiveRecord rows (SubSonic). When I look at the cache with ANTS Memory Profiler, I can see that some tables related to the ActiveRecord I want to cache are cached as well. This makes the cached items very large because of the additional (unneeded) tables.
Any ideas on how to prevent this?

I believe you will have to modify or remove the lazy-loading of relationships in the ActiveRecord classes.
The lazy-loading behavior is generated by the ActiveRecord.tt template, starting at line 300 in the most current version:
#region ' Foreign Keys '
<#
    List<string> fkCreated = new List<string>();
    foreach(FKTable fk in tbl.FKTables)
    {
        if(!ExcludeTables.Contains(fk.OtherTable)){
            string propName=fk.OtherQueryable;
            if(fkCreated.Contains(propName))
            {
                propName=fk.OtherQueryable+fkCreated.Count.ToString();
            }
            fkCreated.Add(fk.OtherQueryable);
#>
        public IQueryable<<#=fk.OtherClass #>> <#=propName #>
        {
            get
            {
                var repo=<#=Namespace #>.<#=fk.OtherClass#>.GetRepo();
                return from items in repo.GetAll()
                       where items.<#=CleanUp(fk.OtherColumn)#> == _<#=CleanUp(fk.ThisColumn)#>
                       select items;
            }
        }
<#
        }
    }
#>
#endregion
I would try removing this entire region and seeing if the excessive caching is resolved. Of course, if you rely on the lazy-loading behavior you will have to address that now.
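If you depend on the generated lazy-load properties elsewhere, a less invasive alternative is to cache a plain projection instead of the ActiveRecord objects themselves, so no navigation properties (and therefore no related tables) end up in the cache. A minimal sketch, assuming a generated Product class and HttpRuntime.Cache (the DTO and its properties are illustrative):

// Plain DTO: only the columns we need, no lazy-loaded relationships.
public class ProductCacheItem
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
}

// Project to DTOs before inserting, so the cached list holds no
// references back to repositories or related tables.
var snapshot = Product.All()
    .Select(p => new ProductCacheItem
    {
        ProductId = p.ProductId,
        ProductName = p.ProductName
    })
    .ToList();

HttpRuntime.Cache.Insert("ProductList", snapshot);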

Related

How can I optimize this function that gets all values in a RedisJSON database?

My function:
public IQueryable<T> getAllPositions<T>(RedisDbs redisDbKey)
{
    List<T> positions = new List<T>();
    List<string> keys = new List<string>();
    foreach (var key in _redisServer.Keys((int)redisDbKey))
    {
        keys.Add(key.ToString());
    }
    var sportEventRet = _redis.GetDatabase((int)redisDbKey).JsonMultiGetAsync(keys.ToArray());
    foreach (var sportEvent in sportEventRet.Result)
    {
        var redisValue = (RedisValue)sportEvent;
        if (!redisValue.IsNull)
        {
            var positionEntity = JsonConvert.DeserializeObject<T>(redisValue, jsonSerializerSettings);
            positions.Add(positionEntity);
        }
    }
    return positions.AsQueryable();
}
Called as:
IQueryable<IPosition> union = redisClient.getAllPositions<Position>(RedisDbs.POSITIONDB);
where Position is a simple model with just a few simple properties, and RedisDbs is an enum that maps each database to an int. With both this application and the RedisJSON instance running locally on a high-performance server, it takes two seconds for this function to return the contents of a database holding 20k JSON values. That is unacceptable for my use case; I need this done in at most 1 second, preferably under 600 ms. Are there any optimizations I could make?
I'm convinced the problem is the KEYS command.
Here is what redis.io says about the KEYS command:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code.
You can save the list of your JSON key names somewhere and use that list in your function instead of calling the KEYS command.
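As a sketch of that approach with StackExchange.Redis (the "position:keys" set name and positionKey variable are illustrative; you would add to the set wherever positions are written):

using System.Linq;
using StackExchange.Redis;

// When a position is written, record its key in a Redis SET as well:
IDatabase db = _redis.GetDatabase((int)redisDbKey);
db.SetAdd("position:keys", positionKey);

// When reading, fetch the tracked key names back with SMEMBERS (a
// single read of one set) instead of scanning the whole keyspace:
RedisValue[] tracked = db.SetMembers("position:keys");
string[] keys = tracked.Select(v => (string)v).ToArray();

If tracking keys at write time isn't an option, SCAN-based iteration is still preferable to KEYS because it doesn't block the server for the whole walk, though it is unlikely to get you under your latency budget the way a maintained key set will.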

Looping through keys in ASP.NET cache object

Caching in ASP.NET looks like it uses some kind of associative array:
// Insert some data into the cache:
Cache.Insert("TestCache", someValue);
// Retrieve the data like normal:
someValue = Cache.Get("TestCache");
// But it can also be done associatively ...
someValue = Cache["TestCache"];
// Null checks can be performed to see if a cache entry exists yet:
if (Cache["TestCache"] == null) {
    Cache.Insert("TestCache", PerformComplicatedFunctionThatNeedsCaching());
}
someValue = Cache["TestCache"];
As you can see, performing a null check on the cache object is very useful. But I would like to implement a cache-clear function that can clear cache values where I don't know the whole key name. Since there seems to be an associative array here, it should be possible (?)
Can anyone help me work out a way of looping through the stored cache keys and performing simple logic on them? Here's what I'm after:
static void DeleteMatchingCacheKey(string keyName) {
    // This foreach implementation doesn't work, by the way ...
    foreach (Cache as c) {
        if (c.Key.Contains(keyName)) {
            Cache.Remove(c);
        }
    }
}
Don't use a foreach loop when removing items from any collection type: foreach relies on an enumerator, which will NOT allow you to remove items from the collection (the enumerator throws an exception if the collection it is iterating over has items added or removed).
Use a simple while loop to snapshot the cache keys first, then remove the matches:
// Cache has no indexable Keys property, so collect the key names
// with the enumerator first; removing while enumerating would
// invalidate the enumerator.
List<string> keys = new List<string>();
IDictionaryEnumerator e = Cache.GetEnumerator();
while (e.MoveNext()) {
    keys.Add((string)e.Key);
}
foreach (string key in keys) {
    if (key.Contains(keyName)) {
        Cache.Remove(key);
    }
}
Another way to do it in .NET Core, assuming you keep a list of the key names cached under its own entry:
// 'keyName' is the entry that holds the tracked list of keys.
var keys = _cache.Get<List<string>>(keyName);
foreach (var key in keys)
{
    _cache.Remove(key);
}
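That only works if the list of key names is maintained as entries are added, since IMemoryCache itself cannot enumerate its keys. A minimal sketch of the bookkeeping, assuming Microsoft.Extensions.Caching.Memory (the wrapper class and index-key name are illustrative):

using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.Caching.Memory;

public class TrackedCache
{
    private const string IndexKey = "cache:index"; // entry that holds the tracked key names
    private readonly IMemoryCache _cache;

    public TrackedCache(IMemoryCache cache) => _cache = cache;

    public void Set<T>(string key, T value)
    {
        _cache.Set(key, value);
        // Record the key so matching entries can be found later.
        var keys = _cache.GetOrCreate(IndexKey, _ => new List<string>());
        if (!keys.Contains(key)) keys.Add(key);
    }

    public void RemoveMatching(string fragment)
    {
        var keys = _cache.Get<List<string>>(IndexKey) ?? new List<string>();
        // Iterate a copy of the matches so the index can be mutated safely.
        foreach (var key in keys.Where(k => k.Contains(fragment)).ToList())
        {
            _cache.Remove(key);
            keys.Remove(key);
        }
    }
}

Note the sketch is not thread-safe; concurrent writers would need a lock (or a ConcurrentDictionary as the index) around the list updates.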

Random failures using CMISQL queries on Alfresco 3.3.0

[Solved: it seems there was a bug affecting Alfresco 3.3.0 which is no longer present in Alfresco 3.3.0g]
Hi,
I'm using OpenCMIS to retrieve data from Alfresco 3.3, but I'm seeing very weird behaviour on CMISQL queries. I've googled for somebody else with the same problem, but it seems I'm the first one all over the world :), so I guess it's my fault, not OpenCMIS's.
This is how I'm querying Alfresco:
public class CmisTest {

    private static Session sesion;
    private static final String QUERY =
        "select cmis:objectid, cmis:name from cmis:folder where cmis:name='MyFolder'";

    public static void main(String[] args) {
        // Open a CMIS session with Alfresco
        Map<String, String> params = new HashMap<String, String>();
        params.put(SessionParameter.USER, "admin");
        params.put(SessionParameter.PASSWORD, "admin");
        params.put(SessionParameter.ATOMPUB_URL, "http://localhost:8080/alfresco/s/api/cmis");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        params.put(SessionParameter.REPOSITORY_ID, "fa9d2553-1e4d-491b-87fd-3de894dc7ca9");
        sesion = SessionFactoryImpl.newInstance().createSession(params);

        // Ugly bug in Alfresco which raises an exception if we request more data than is available
        // See https://issues.alfresco.com/jira/browse/ALF-2859
        sesion.getDefaultContext().setMaxItemsPerPage(1);

        // We repeat the same query 20 times and count the number of elements retrieved each time
        for (int i = 0; i < 20; i++) {
            List<QueryResult> result = doQuery();
            System.out.println(result.size() + " folders retrieved");
        }
    }

    public static List<QueryResult> doQuery() {
        List<QueryResult> result = new LinkedList<QueryResult>();
        try {
            int page = 0;
            while (true) {
                ItemIterable<QueryResult> iterable = sesion.query(QUERY, false).skipTo(page);
                page++;
                for (QueryResult qr : iterable) {
                    result.add(qr);
                }
            }
        } catch (Exception e) {
            // We will always get an exception when Alfresco has no more data to retrieve... :(
            // See https://issues.alfresco.com/jira/browse/ALF-2859
        }
        return result;
    }
}
As you can see, we just execute the same query, up to 20 times in a row. You would expect the same result each time, wouldn't you? Unfortunately, this is a sample of what we get:
1 folders retrieved
1 folders retrieved
1 folders retrieved
0 folders retrieved
0 folders retrieved
0 folders retrieved
0 folders retrieved
0 folders retrieved
1 folders retrieved
1 folders retrieved
Sometimes we get twenty 1s in a row, sometimes it's all 0s. We have never gotten a "mix" of 1s and 0s, though; we always get a run of one or the other.
It does not matter if we create the session before each query; we still see the random behaviour. We have tried against two different Alfresco servers (both 3.3 Community, clean installations), and both fail randomly. We also tried measuring the time of each query, but it doesn't seem to bear any relation to whether the result is wrong (0 folders retrieved) or right (1 folder retrieved).
Alfresco seems to be working fine: if we go to "Administration --> Node browser" and launch the CMISQL query from there, it always retrieves one folder, which is right. So, it must be our code, or an OpenCMIS bug...
Any ideas?
I can't reproduce this behavior. It runs fine against http://cmis.alfresco.com . The issue https://issues.alfresco.com/jira/browse/ALF-2859 states that there have been bug fixes. Are you running the latest Alfresco version?
Florian

Working with SubSonic 'deleted' rows

When loading data with SubSonic (either using ActiveRecord or a collection), only records with IsDeleted set to false will load. How can I show those rows that have been deleted?
For example, deleting an Employee with:
Employee.Delete(1)
Now employee 1 is marked as deleted, but I want the option to undo the delete and/or show a list of deleted employees. How can I do that? Either the delete will be undone if the user accidentally deleted the employee, or they can go to a 'trash' list of previously deleted employees (i.e. only those with IsDeleted set to true).
Edit:
Using SubSonic 2.2
ActiveRecord doesn't have this built in; you'll need to set up additional queries for it. (You didn't specify 2.2 or 3.0; this is 2.2 syntax.)
public EmployeeCollection FetchAll(bool isDeleted)
{
    return new SubSonic.Select()
        .From(Employee.Schema)
        .Where(Employee.IsDeletedColumn)
        .IsEqualTo(isDeleted)
        .ExecuteAsCollection<EmployeeCollection>();
}
public EmployeeCollection GetTrashList()
{
return FetchAll(true);
}
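Undoing a delete then follows the same pattern: load the row regardless of the flag, flip it, and save. A sketch in 2.2 syntax (assuming the generated Employee class exposes the column as an IsDeleted property):

public static void UndeleteEmployee(int employeeId)
{
    // A hand-built Select bypasses the loaders' IsDeleted filter,
    // so this finds the row even though it is flagged as deleted.
    Employee emp = new SubSonic.Select()
        .From(Employee.Schema)
        .Where(Employee.Columns.EmployeeId).IsEqualTo(employeeId)
        .ExecuteSingle<Employee>();

    emp.IsDeleted = false;
    emp.Save();
}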
I was running into this problem yesterday with SubSonic 3 and decided to alter the T4 templates to "fix" it. I added these definitions for a new function, LogicalAll. As an alternative you could change the definition of All to this, but then you would have no way of getting at the deleted records.
public static IQueryable<<#=tbl.ClassName#>> LogicalAll(string connectionString, string providerName) {
<#  if(tbl.HasLogicalDelete()){ #>
    var results = GetRepo(connectionString, providerName).GetAll();
    if(results == null)
    {
        return new List<<#=tbl.ClassName#>>().AsQueryable();
    }
    return results.Where(x => x.<#=tbl.DeleteColumn.CleanName#> == false);
<#  } else { #>
    return GetRepo(connectionString, providerName).GetAll();
<#  } #>
}

public static IQueryable<<#=tbl.ClassName#>> LogicalAll() {
<#  if(tbl.HasLogicalDelete()){ #>
    var results = GetRepo().GetAll();
    if(results == null)
    {
        return new List<<#=tbl.ClassName#>>().AsQueryable();
    }
    return results.Where(x => x.<#=tbl.DeleteColumn.CleanName#> == false);
<#  } else { #>
    return GetRepo().GetAll();
<#  } #>
}
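With that template change in place, usage would look something like this (assuming a generated Product class whose logical-delete column is named IsDeleted):

// Non-deleted rows only, via the new template method:
var visible = Product.LogicalAll().ToList();

// All() still returns everything, so the trash stays reachable:
var trash = Product.All().Where(p => p.IsDeleted).ToList();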
I'm running into the same issue.
I'm working on a project that's using the ActiveRecord scheme. I can retrieve logically deleted records just fine by querying for them specifically.
The problem is that the ActiveRecord generated classes do not have any properties or methods to modify the deleted status of the record.
It should be as simple as setting "IsDeleted = false" but this functionality doesn't seem to exist.
-- Nevermind on this. I regenerated my ActiveRecord class, and now my Deleted column is accessible by calling code. Must've gotten stuck somewhere.
It is easy to show these rows simply by creating the query by hand instead of using the collection loaders, i.e.:
ProductsCollection col = new ProductsCollection().Load();
becomes
ProductsCollection col = new Select()
    .From(Tables.Products)
    .ExecuteAsCollection<ProductsCollection>();
This should load everything for you. Furthermore, you can set the options yourself:
ProductsCollection col = new Select()
    .From(Tables.Products)
    .Where(Products.Columns.IsDeleted).IsEqualTo(false)
    .Or(Products.Columns.IsDeleted).IsEqualTo(null)
    .ExecuteAsCollection<ProductsCollection>();
This would load all the nulls (if you forgot to set your column's default value to false) AND it will also load the falses. (Note the .Or: chaining these with .And would match nothing, since a column can't be both false and null at once.)
Hope this helps

ASP.NET Cache - circumstances in which Remove("key") doesn't work?

I have an ASP.NET application that caches some business objects. When a new object is saved, I call Remove on the key to clear the objects; the new list should be lazy-loaded the next time a user requests the data.
Except there is a problem: different clients see different views of the cache.
Two users are browsing the site.
A new object is saved by user 1 and the cache entry is removed.
User 1 sees the up-to-date view of the data.
User 2 is also using the site but for some reason does not see the new data after user 1 has saved the object; they continue to see the old list.
This is a shortened version of the code:
public static JobCollection JobList
{
    get
    {
        if (HttpRuntime.Cache["JobList"] == null)
        {
            GetAndCacheJobList();
        }
        return (JobCollection)HttpRuntime.Cache["JobList"];
    }
}

private static void GetAndCacheJobList()
{
    using (DataContext context = new DataContext(ConnectionUtil.ConnectionString))
    {
        var query = from j in context.JobEntities
                    select j;
        JobCollection c = new JobCollection();
        foreach (JobEntity i in query)
        {
            Job newJob = new Job();
            ....
            c.Add(newJob);
        }
        HttpRuntime.Cache.Insert("JobList", c, null, Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
    }
}

public static void SaveJob(Job job, IDbConnection connection)
{
    using (DataContext context = new DataContext(connection))
    {
        JobEntity ent = new JobEntity();
        ...
        context.JobEntities.InsertOnSubmit(ent);
        context.SubmitChanges();
        HttpRuntime.Cache.Remove("JobList");
    }
}
Does anyone have any ideas why this might be happening?
Edit: I am using Linq2SQL to retrieve the objects, though I am disposing of the context.
I would ask you to make sure you do not have multiple production servers for load-balancing purposes. In that case you will have to use some external cache-dependency mechanism for invalidating/removing the cache items.
That's because you don't synchronize your cache operations. You should lock when writing your list to the cache (possibly even build the list inside the lock) and also when removing it from the cache. Otherwise, even if reading and writing are individually synchronized, there's nothing to prevent storing the old list right after your call to Remove. Let me know if you need a code example.
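A minimal sketch of that locking pattern applied to the code above (LoadJobList stands in for the body of GetAndCacheJobList):

private static readonly object CacheLock = new object();

public static JobCollection JobList
{
    get
    {
        JobCollection list = (JobCollection)HttpRuntime.Cache["JobList"];
        if (list == null)
        {
            lock (CacheLock)
            {
                // Re-check inside the lock: another thread may have
                // repopulated the entry while we were waiting.
                list = (JobCollection)HttpRuntime.Cache["JobList"];
                if (list == null)
                {
                    list = LoadJobList(); // build the collection as before
                    HttpRuntime.Cache.Insert("JobList", list);
                }
            }
        }
        return list;
    }
}

public static void SaveJob(Job job, IDbConnection connection)
{
    lock (CacheLock)
    {
        // ... persist the job as before ...
        HttpRuntime.Cache.Remove("JobList");
    }
}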
I would also check, if you haven't already, that the old data they're seeing hasn't been somehow cached in ViewState.
You have to make sure that User 2 sent a new request. Maybe the content they saw came from their browser's cache, not the cache on your server.
