How to unload objects from memory while reading realm data in batches - realm

The following is a code example (in Swift) given by the Realm team for reading a subset of an entire result set:
// Loop through the first 5 Dog objects,
// restricting the number of objects read from disk
let dogs = try! Realm().objects(Dog.self)
for i in 0..<5 {
    let dog = dogs[i]
    // ...
}
Ref: https://realm.io/docs/swift/latest/ (Limiting Results)
I understand that objects are loaded lazily as we read them from Realm, but I am a little confused about how Realm unloads them from memory as we keep progressing. For example, if my query returns 100 objects as a result set and I read the first 10 objects, then on user interaction the next 10, and so on: what happens to the previously read objects in memory? How can I control memory usage in such cases, since I may want to unload the previous subset when I read a newer one?
My guess is that I may need to invalidate the old Realm and make a new connection to fetch the result set again, reading from the new offset; invalidating may unload everything read previously. Please suggest.
Thank you!

Related

What happens if we don't delete an object handle? What effect does it have? How do we see it? - PROGRESS 4GL

I am using the query below, which has a handle, but I can see that nothing happens whether or not I delete the handle's object. Yet everyone says to always delete the object in a FINALLY block. Why do we need to delete them? What happens if we don't delete them? How do we see the effect?
finally:
    if valid-handle(hQueryTest) then delete object hQueryTest no-error.
    if valid-handle(hQuerytestvalue) then delete object hQuerytestvalue no-error.
end finally.
OpenEdge simply does not have a reference-count-based garbage collector for handle-based objects, so the object the query handle points to will remain in the AVM's memory forever. If that happens on an AppServer, the memory consumption of the AppServer process may grow slowly but steadily.
OpenEdge has the concept of WIDGET-POOLs, which can help with memory management.
You can enable the DynObjects.* log-entry-types to get insight into the lifetime of dynamic objects, whether handle-based or class-based.

Slowdown issue in web project

I just need a suggestion in this case. There is a PIN code field in my ASP.NET project, and I have stored around 50,000 PIN codes in a SQL Server database. When I run the project on localhost, it slows down, because I have a drop-down that gets its values from the database. I think this is because a huge amount of data is being rendered into the HTML: when I click View Source at run time, I can see every PIN code inside it.
Moreover, I have done the same thing for selecting CITY and STATE from the database.
I would really appreciate any logic or technique to lessen this slowdown.
If you are using all the PIN codes on a single page, then you have multiple options to address this slowdown. If the project is still in its initial phase, try a NoSQL database such as MongoDB; otherwise go for Solr or Redis, which give fast access to the data. If you are not able to use these, you can optimize with eager loading and caching of the data.
If it is not all on a single page, then break it into batches by paginating the PIN codes, as sketched below.
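As a rough illustration, paging could be done server-side with plain ADO.NET and OFFSET/FETCH (SQL Server 2012 or later). The table and column names (PinCodes, Code) and the connection string are assumptions for illustration:
using System.Collections.Generic;
using System.Data.SqlClient;

public static List<string> GetPinCodePage(string connectionString, int pageIndex, int pageSize)
{
    // Fetch one page of PIN codes instead of rendering all 50,000 at once
    var codes = new List<string>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"SELECT Code FROM PinCodes
          ORDER BY Code
          OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY", conn))
    {
        cmd.Parameters.AddWithValue("@Offset", pageIndex * pageSize);
        cmd.Parameters.AddWithValue("@PageSize", pageSize);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                codes.Add(reader.GetString(0));
        }
    }
    return codes;
}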
This is a common problem for any website that deals with a large amount of data. To be frank, there is no code-level quick fix; you need to pick one of the following approaches for faster retrieval.
Caching -
Use Redis or Memcached. In simpler words, on the first request the cache manager reads your data from SQL Server and stores it; subsequent requests are then served from the cache.
Also, don't forget to make a provision to invalidate the cached data when new PIN codes are added.
Edit: You can also use the object caching provided by the .NET Framework. Refer to: object caching.
The code will be something like this:
if (Cache["key_pincodes"] == null)
{
// if No object is present in Cache, add it to the cache with expiry time of 10 minutes
// Read data to datatable or any object
DataTable pinCodeObject = GetPinCodesFromdatabase();
Cache.Insert("key_pincodes", pinCodeObject, null, DateTime.MaxValue, TimeSpan.FromMinutes(10));
}
else // If pinCodes are cached, dont make Database call and read it from cache
{
// This will get execute
DataTable pinCodeObject = (DataTable)Cache["key_pincodes"];
}
// bind it your dropdown
NoSQL database -
MongoDB, XML, or plain text files could be used to read the data. It will take much less time than a database hit.

Firebase / AngularFire limited object deletes its properties on $save()

I have a primary node in my database called 'questions'. When I create a ref to that node and bring it into my project via $asObject(), I can modify the individual questions and $save() the collection without any problems. However, as soon as I try to limit the object by priority, the $save() deletes everything off the object!
this works fine:
db.questions = $firebase(fb.questions).$asObject();
// later :
db.questions.$save();
// db.questions is an object with many 'questions', which I can edit and resave as I please
but as soon as I switch my code to this:
db.questions = $firebase(fb.questions.startAt(auth.user.id).endAt(auth.user.id)).$asObject();
// later :
db.questions.$save();
// db.questions is an empty firebase object without any 'questions!'
Is there some limitation on limited objects (pun not intended) and their ability to be changed and saved? The save actually writes the question updates to the database, but somehow nukes the local $firebase object...
First line of synchronized arrays ($asArray) documentation:
Synchronized arrays should be used for any list of objects that will be sorted, iterated, and which have unique ids.
First line of synchronized objects ($asObject) documentation:
Objects are useful for storing key/value pairs, and singular records that are not used as a collection.
As demonstrated, if you are going to work with a collection and employ limit, it would behoove you to use a tool designed for collections (i.e. $asArray).
If you were to recreate the behavior of $save using the Firebase SDK, it would look like this:
var ref = new Firebase(URL).limit(10);
// ref.set(data); // throws an error!
ref.ref().set(data); // replaces the entire path; same as $save
Thus, the behavior here exactly matches the SDK. You cannot, technically, call set() on a query instance and this doesn't make any sense, really. What does limit(10) mean to a JSON object? If you call set, which 10 unordered keys should be set? There is no correlation here and limit() really only makes sense with a collection of data, not a list of key/value pairs.
Hope that helps.

How to share data between threads?

In the main thread I open a new thread that fetches the number of the user's new messages (this takes about 5 seconds), and this second thread should save that number in some place.
The main thread should then check that "some place", and if the value exists, display it on the page.
Where can I save the value from the second thread so it can be read from the main one? This value is unique per user, so I can't use a plain static field.
Thanks in advance!
You can use a static dictionary with the user id as the key and the result as the value. Protect dictionary access with locks. After the main thread reads a value, you can clear it from the dictionary; see the sketch below.
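A small sketch of that idea in C#, using ConcurrentDictionary so the locking is handled for you; the class and method names (and the int-typed user id and count) are assumptions for illustration:
using System.Collections.Concurrent;

public static class MessageCountStore
{
    private static readonly ConcurrentDictionary<int, int> Counts =
        new ConcurrentDictionary<int, int>();

    // Called by the worker thread once the count is known
    public static void Publish(int userId, int newMessageCount)
    {
        Counts[userId] = newMessageCount;
    }

    // Called by the main thread; removes the value once it has been read
    public static bool TryConsume(int userId, out int newMessageCount)
    {
        return Counts.TryRemove(userId, out newMessageCount);
    }
}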
Use a critical section to protect access to data that several threads can read/write. Use a singleton instance to store the data, a global variable, the registry pattern, or whatever fits.
The way I do it, I have a vector of "ThreadData" elements. Each thread is given one of these elements when started and can update that data (protected by mutexes).
The main thread simply checks a state flag in the element (ThreadState: Running, Idle, Stopped, etc.) and reads the other data the thread has updated; a rough sketch follows.
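A rough C# rendering of that pattern, assuming the worker publishes a single integer result; all names are illustrative:
public class ThreadData
{
    private readonly object _gate = new object();
    private bool _done;
    private int _result;

    // The worker thread publishes its result and flips the flag
    public void Complete(int result)
    {
        lock (_gate) { _result = result; _done = true; }
    }

    // The main thread polls the flag and reads the result once it is set
    public bool TryGetResult(out int result)
    {
        lock (_gate) { result = _result; return _done; }
    }
}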

Need suggestion for ASP.Net in-memory queue

I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g., http://site/some.img, where some.img is an HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to handle a few hundred or thousand requests per second), and I should be able to unload this in-memory object to a SQL table using SqlBulkCopy.
List --> DataTable --> SqlBulkCopy
I thought of using the Cache object: create a generic List, save it in HttpContext.Cache, and insert a new item into it on each request. This will NOT work, as the CacheItemRemovedCallback fires right away when the HttpHandler tries to add a new item, so I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load increases?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything there. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two... do you really need the queue? SQL Server will also scale pretty well if you just want to write directly into the DB. Rely on connection pooling rather than sharing a single connection object (a SqlConnection instance is not thread-safe); see the sketch below.
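For instance, a direct per-request insert might look like the following sketch; the table and column names are assumptions, and each new SqlConnection created with the same connection string is served from the pool:
using System.Data.SqlClient;

public static void LogRequest(string connectionString, string imageName)
{
    // Opening a pooled connection per request is cheap;
    // do not share one SqlConnection instance across threads
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO ImageRequests (ImageName, RequestedAt) VALUES (@Name, GETUTCDATE())", conn))
    {
        cmd.Parameters.AddWithValue("@Name", imageName);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}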
How about just using the generic List to store requests and a different thread to do the SqlBulkCopy?
This way, storing requests in the list won't block the response for long, and the background thread can update SQL on its own schedule, say every 5 minutes or so.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback: insert some object with a removal time of 5 minutes and re-insert it at the end of the processing work. A sketch of this list-plus-background-flush approach follows.
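A minimal sketch of that idea, using a ConcurrentQueue and a System.Threading.Timer rather than the Cache-callback trick; the connection string, table name, and column are assumptions for illustration:
using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class RequestLogger
{
    private static readonly ConcurrentQueue<string> Pending = new ConcurrentQueue<string>();

    // Flush the queue to SQL Server every 5 minutes on a background thread
    private static readonly Timer FlushTimer =
        new Timer(delegate { Flush(); }, null, TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));

    // Called from the HttpHandler on every image request; never blocks on the database
    public static void Enqueue(string imageName)
    {
        Pending.Enqueue(imageName);
    }

    private static void Flush()
    {
        // Drain whatever has accumulated into a DataTable for SqlBulkCopy
        var table = new DataTable();
        table.Columns.Add("ImageName", typeof(string));
        string name;
        while (Pending.TryDequeue(out name))
            table.Rows.Add(name);
        if (table.Rows.Count == 0)
            return;

        using (var bulk = new SqlBulkCopy("<your connection string>"))
        {
            bulk.DestinationTableName = "ImageRequests";
            bulk.WriteToServer(table);
        }
    }
}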
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (when the count should be 2), the CacheItemRemovedCallback gets fired because I'm replacing the current Cache object with a new one. Initially I also thought this was weird behavior, so I'll have to look deeper into it.
Also, for the second suggestion, I will try inserting records (with the cached SqlConnection object) and see what performance I get under a stress test. I doubt I'll get fantastic numbers, since it's an I/O operation.
Meanwhile, I'll keep digging on my side for an optimal solution, with your suggestions in mind.
You can add a condition inside the callback to ensure you are working on a cache entry that was dropped due to expiration rather than an explicit remove/replace (in VB, since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
    Select Case removalReason
        Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
            ' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this excludes items that are replaced or removed explicitly (Cache.Remove) '
    End Select
End Sub
Edit: Here it is in C#, if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
    switch (removalReason)
    {
        case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
        case System.Web.Caching.CacheItemRemovedReason.Expired:
        case System.Web.Caching.CacheItemRemovedReason.Underused:
            // This excludes System.Web.Caching.CacheItemRemovedReason.Removed, which is
            // triggered when you overwrite a cache item or remove it explicitly
            // (e.g., HttpRuntime.Cache.Remove(key))
            break;
    }
}
To expand on my previous comment... I get the impression you are thinking about the cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update to that Hashtable is persisted without you explicitly modifying the Cache entry, because the Cache holds a reference to the same object. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and page-request updates happening simultaneously, then I suggest you simply keep TWO cached lists: one that is updated as page requests come in, and one for the bulk copy operation. When a bulk copy finishes, swap the lists and repeat. This is similar to double-buffering video RAM for video games or video apps; a sketch follows.
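A small sketch of that double-buffering swap; all names are assumptions for illustration:
using System.Collections.Generic;

public class DoubleBufferedLog<T>
{
    private readonly object _gate = new object();
    private List<T> _active = new List<T>();

    // Page requests append to the active list
    public void Add(T item)
    {
        lock (_gate) { _active.Add(item); }
    }

    // The bulk-copy thread swaps in a fresh list and drains the old one
    public List<T> SwapOut()
    {
        lock (_gate)
        {
            List<T> drained = _active;
            _active = new List<T>();
            return drained;
        }
    }
}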
