Coherence: Pagination In Transactional Cache - oracle-coherence

Does anybody know if it is possible to implement pagination over an entry set while using a transactional scheme in Coherence? The LimitFilter approach does not work, as it is not supported in a transactional scheme.

I have to admit that this is a very complicated use case, and you are correct that the LimitFilter doesn't help here.
For queries of a "manageable size", my advice is to cache the contents of the keySet() itself wherever the paged results are needed, and use those contents for pagination. For example, store off keySet(query).toArray(), and then use getAll() to fetch any chunk of entries from the cache. If the page size is 20, this grabs the entries for one page (where "page" is a 0-based int):
Map mapForOnePage = cache.getAll(new ImmutableArrayList(arrayOfKeys, page*20, 20));
(ImmutableArrayList is in the com.tangosol.util package.)
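Putting the pieces together, here is a minimal sketch of that approach. The cache name "tx-cache" and the EqualsFilter on a getStatus accessor are purely illustrative assumptions; only the keySet()/getAll()/ImmutableArrayList usage comes from the advice above.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.ImmutableArrayList;
import com.tangosol.util.filter.EqualsFilter;
import java.util.Map;

public class PagedQuery {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("tx-cache");          // illustrative cache name
        Filter query = new EqualsFilter("getStatus", "ACTIVE");        // any query filter

        // Run the query once and keep the matching keys for later page requests.
        Object[] arrayOfKeys = cache.keySet(query).toArray();

        int pageSize = 20;
        int page  = 0;                                                  // 0-based page index
        int from  = page * pageSize;
        int count = Math.min(pageSize, arrayOfKeys.length - from);      // last page may be short

        // Fetch only the entries that belong to the requested page.
        Map mapForOnePage = cache.getAll(new ImmutableArrayList(arrayOfKeys, from, count));
        System.out.println(mapForOnePage.size() + " entries on page " + page);
    }
}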
Hope this helps :)
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.

Related

Meteor Approach to Collections Considering Joins Aren't Supported by Core Yet?

I'm building my project's main functionality right now, so this is a big decision and I want an efficient, scalable solution. I use different APIs to fetch users' products, ultimately feeding one collection that displays product information in a table, with a possible merge by SKU/TITLE across the different sources.
I have thought of two approaches (in both approaches we add Meteor.userId() to each insert so that every user has their own products):
1) Give each API its own collection and fetch its products into it. During or after the API query, as I insert into sourceXProducts, also run the merge-by-SKU logic and add only the fields I need to the main usersProducts collection. That way the sourceXProducts collections keep all the information, and if we ever need something we didn't copy into usersProducts we can still query for it (it can come in handy).
source1Products = new Meteor.Collection('source1Products');
source2Products = new Meteor.Collection('source2Products');
usersProducts = new Meteor.Collection('usersProducts');
Pros: Honestly I'm not sure; it keeps things organized, and from what I've seen of Meteor this pattern is used a lot.
Cons: Meteor collection joins are not supported in core yet, so I would have to use a package such as meteor-publish-composite, which seems good but might hurt performance.
2) Create one collection and just insert everything the API response has, plus an additional apiSource field, so we can select products for a given user and a given API (a quick sketch follows this list).
usersProducts = new Meteor.Collection('usersProducts');
Pros: No joins, possibly better performance.
Cons: Less organized, and it can become a very large collection, which may not be good for MongoDB.
3) Your ideas? :)
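A minimal sketch of approach 2 as referenced above; the sku, title and price fields are illustrative assumptions, while Meteor.Collection, insert(), find() and Meteor.userId() are the real Meteor APIs:
usersProducts = new Meteor.Collection('usersProducts');

// Every row of every API response goes into the same collection, tagged with its source.
usersProducts.insert({
  userId: Meteor.userId(),
  apiSource: 'source1',
  sku: 'ABC-123',
  title: 'Example product',
  price: 9.99
});

// Select products for a given user and a given API without any join:
usersProducts.find({ userId: Meteor.userId(), apiSource: 'source1' });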
First, you should improve the question. You do not tell us anything precise about your schema. What entities do you have, what relations exist between them, and what kinds of joins do you think you will be doing? How often will you be doing them?
Second, you should rethink your schema and think in terms of a non-relational database. I see many people coming from the SQL world who simply design their schema the same way. Wrong. MongoDB is not SQL, and you should not try to reuse everything you learned there. Start using features like subdocuments and arrays, which solve many of the basic things you would do in SQL with joins (a small sketch of that style follows at the end of this answer). So knowing your schema would help us help you design it. For example, see this answer and especially its comments for a discussion of a question similar to the one you are asking here.
Third, have you evaluated the various solutions that already exist out there? There are many, but you have not shown us that you tried any of them, how they worked for you, or what their pros and cons were for you and your project.
Fourth, if you are too lazy to evaluate, you can just use peerlibrary:peerdb and peerlibrary:related. They are simply perfect. You should trust me. I am their author.
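As mentioned in the second point above, here is a hedged sketch of the subdocument/array style, with one document per user product and the per-source details embedded so no join or extra collection is needed; the field names are again illustrative assumptions:
usersProducts = new Meteor.Collection('usersProducts');

// One document per user product; the per-source details live in an embedded array.
usersProducts.insert({
  userId: Meteor.userId(),
  sku: 'ABC-123',
  title: 'Example product',
  sources: [
    { apiSource: 'source1', price: 9.99, stock: 4 },
    { apiSource: 'source2', price: 10.49, stock: 0 }
  ]
});

// Find this user's products that a particular source knows about:
usersProducts.find({ userId: Meteor.userId(), 'sources.apiSource': 'source1' });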

Is there a place for generic WordPress extension tables?

WordPress's data model provides extensibility through its "meta" tables (wp_postmeta, wp_commentmeta, wp_usermeta). This name/value-pair linking is extremely flexible and supported by WordPress's API, which are good reasons to celebrate it, but it can also become quite non-performant if you have lots of extension attributes hanging off your transactions. Let's say, for instance, that I have a Custom Post Type called "Blood Test" and I want it to capture 30-40 different measurements coming out of the blood analysis. Is it practical to use the wp_postmeta join for each test? Probably not.
I could go completely bespoke and build tables left and right, but what I'm wondering is: is there not a way to at least apply an "80/20 rule" and have a generic extension table that provides a static number of additional columns for SQL-searchable attributes, ending with a column holding a JSON object that lets non-SQL-searchable attributes extend almost without limit? Something like what is sketched below:
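A hedged guess at the kind of table being described (the original diagram is not reproduced here), created with WordPress's dbDelta(); the wp_meta_ext table name, the number of generic columns, and the activation-hook registration are illustrative assumptions, while dbDelta(), $wpdb and register_activation_hook() are existing WordPress APIs:
function myplugin_create_meta_ext_table() {
    global $wpdb;
    require_once ABSPATH . 'wp-admin/includes/upgrade.php'; // provides dbDelta()

    $table   = $wpdb->prefix . 'meta_ext';
    $charset = $wpdb->get_charset_collate();

    // A handful of generic, indexed, SQL-searchable columns plus one JSON overflow column.
    $sql = "CREATE TABLE $table (
        id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
        object_id BIGINT(20) UNSIGNED NOT NULL,
        object_type VARCHAR(20) NOT NULL DEFAULT 'post',
        tran_type VARCHAR(40) NOT NULL,
        col1 VARCHAR(255) DEFAULT NULL,
        col2 VARCHAR(255) DEFAULT NULL,
        col3 VARCHAR(255) DEFAULT NULL,
        json_ext LONGTEXT,
        PRIMARY KEY  (id),
        KEY object_lookup (object_type, object_id, tran_type),
        KEY col1 (col1),
        KEY col2 (col2),
        KEY col3 (col3)
    ) $charset;";
    dbDelta( $sql );
}
register_activation_hook( __FILE__, 'myplugin_create_meta_ext_table' );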
I was thinking that, by doing this, one could also extend the WordPress API so that most development could be largely ignorant of this structural difference. Something like this:
EXAMPLE OF REGISTRATION API:
$my_meta_ext = new WP_Meta_Extension( 'post' );
$my_meta_ext->add_tran_type( 'blood-measurement', array(
    'col1' => 'total-cholesterol',
    'col2' => 'triglycerides',
    'col3' => 'etc.',
) );
EXAMPLE OF USAGE API:
add_ext_meta ( $term_id, 'blood-measurement.total-cholesterol', $meta_value );
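For what it's worth, here is a hedged sketch of what add_ext_meta() might do internally under the 80/20 idea. WP_Meta_Extension, add_ext_meta() and the wp_meta_ext table are hypothetical (they are the API being proposed, not existing WordPress code); only $wpdb, get_option() and json_encode() are existing WordPress/PHP functions, and the sketch assumes the row for the object already exists:
function add_ext_meta( $object_id, $key, $meta_value ) {
    global $wpdb;
    // 'blood-measurement.total-cholesterol' -> transaction type + attribute name.
    list( $tran_type, $attr ) = explode( '.', $key, 2 );
    $table = $wpdb->prefix . 'meta_ext';
    $where = array( 'object_id' => $object_id, 'tran_type' => $tran_type );

    // Column map registered via WP_Meta_Extension::add_tran_type(), assumed stored as an option.
    $columns = get_option( 'meta_ext_cols_' . $tran_type, array() );
    $column  = array_search( $attr, $columns );

    if ( false !== $column ) {
        // 80% case: the attribute has its own SQL-searchable column (col1, col2, ...).
        $wpdb->update( $table, array( $column => $meta_value ), $where );
    } else {
        // 20% case: fold the attribute into the JSON overflow column.
        $json = $wpdb->get_var( $wpdb->prepare(
            "SELECT json_ext FROM {$table} WHERE object_id = %d AND tran_type = %s",
            $object_id, $tran_type
        ) );
        $data = $json ? json_decode( $json, true ) : array();
        $data[ $attr ] = $meta_value;
        $wpdb->update( $table, array( 'json_ext' => json_encode( $data ) ), $where );
    }
}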
Ok, so here are my explicit questions for the group:
Does this make sense? Do you see value in this? Does anything like this already exist in a 'plugin' form (or otherwise)?
If you were to use one extension table across all types (i.e. posts, comments, users, etc.), would you be able to build an efficient index in MySQL? Or is it better just to have a table per type?
Does anyone have any metrics -- even if they're not fully proven out -- on when the default WordPress extension model starts to degrade in performance?
If you've done something like this already ... any key lessons learned when extending the model? How complicated a task is this?
I brought this topic to a specialist WordPress working group in London, and while we didn't arrive at an answer there was pretty clear agreement that this type of solution would be highly valuable. Furthermore, I have decided to start a Git repository with the intent of developing a plugin to address this (hopefully with the generous support of the group in London, or of any of you who are reading this and feel you could help).
The repository can be found here:
https://bitbucket.org/ksnyder/wp-database-extension/
Please understand that at this moment there is really just an idea, a high-level design, and an interface spec, but no real code. That will change, of course, and it will change more quickly if you're willing to help us with this endeavour (mind you, I have no personal experience running distributed development, but you have to learn somewhere).

What are the rules for deciding when a new catalog should be created?

I'd like to learn about using catalogs correctly.
I have about 30 useful content types, about 50 indexes in catalog.xml, and about 45 metadata columns. Just three types account for most of the site's data, and I may need millions of those. I've been reading, and there's lots to do, but I want to get the basic configuration right before I begin all that.
This page told me that any non-default indexes should not be added to the portal_catalog. I've even read people explaining how removing one or two of the default indexes makes a performance difference.
My question is: what are the rules for dividing up the indexes into different catalogs, and for selecting which catalog(s) index which type(s)?
So far I have created one additional catalog, used to hold all the indexes for my 'site-setup' objects (which I have stopped indexing in portal_catalog). The site-setup indexes are used very often but modified more rarely than the others, so I thought it was right to separate them from objects that are reindexed more often. I'm not sure that's the main consideration, though.
Another, similar question (a good example of the kind of thing I want to solve): how would you handle something like secondary-workflow review_state variables? I give each workflow's review_state variable an index (and search on them quite often), but some of my workflows are used on only a few types (my most prolific objects have secondary workflows...).
I'd be very grateful for advice!
Campbell
This won't cover everything, but I'll bring up some points.
Anything not in the portal_catalog won't work with collections, the folder_contents view, the getFolderContents method, search, collection portlets, related items (I think), and anything else that assumes you're using the portal_catalog.
I like to use an additional catalog when I need to be able to query the data but it only affects a subset of the content objects.
Use collective.indexing to speed up indexing operations.
Mount the catalogs on their own mount points so you can cache them differently from the rest of the site (so you can cache the whole catalog). Then you can even serve the catalogs from a dedicated zeoserver.
Also, if your content doesn't have to be cataloged by the portal_catalog (with all the constraints listed), you may even want to think about whether you need it as a full-fledged (Archetypes|Dexterity) type in the first place. You can use the slimmer repoze.catalog to catalog arbitrary objects (which could be very simple data) for whatever your purpose is and get even more performance (a small repoze.catalog sketch follows at the end of this answer). Or better yet, look into Solr for indexing, for VERY good performance.
One more thing: depending on the type of data you're storing, you could even look into using a relational database as the data store. But I don't know what kind of queries, indexes, data, etc. you have...
30 different types seems like a lot but I don't know what your use case is. Care to share? Perhaps there is a better way to do it.
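Here is the promised repoze.catalog sketch: cataloging simple non-content objects outside the portal_catalog. The Measurement class and the review_state attribute are illustrative assumptions; the API shown (Catalog, CatalogFieldIndex, Eq, index_doc, query) is repoze.catalog's, to the best of my knowledge:
from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.field import CatalogFieldIndex
from repoze.catalog.query import Eq


class Measurement(object):
    """A very simple data object; no Archetypes/Dexterity machinery needed."""
    def __init__(self, review_state, value):
        self.review_state = review_state
        self.value = value


catalog = Catalog()
# Index the attribute by name; the discriminator can also be a callable.
catalog['review_state'] = CatalogFieldIndex('review_state')

# In real code the docid -> object mapping would live in a document map / ZODB.
catalog.index_doc(1, Measurement('published', 42))
catalog.index_doc(2, Measurement('private', 7))

numdocs, docids = catalog.query(Eq('review_state', 'published'))
print(numdocs, list(docids))   # -> 1 [1]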

node_load or direct query?

What rule of thumb do you use for deciding whether to use node_load() or just write a direct db_query()?
In a situation I'm looking at right now I need to get some node data and resolve data on two nodereference fields. So that would be 3 calls to node_load(). At some point here, would it be more efficient to construct the query with Joins directly?
This is for use in a self-contained module that won't be distributed or used anywhere else, so I don't believe I need to worry about subverting node modification hooks (or do I?).
Edit:
Thinking about my question more: node_load() is only really applicable when you have one node to grab (and then maybe drill down further into nodereferences, as in my example). But as soon as you need to return more than one node based on some criteria, you're pretty much forced to use db_query, right? Does Drupal have any abstracted API for writing queries like this?
Not a full answer (Not sure myself), just some hints.
node_load() is using a static cache (in Drupal 7, you can even use the entity_cache module to make it a permanent cache). If the nodes you are loading are being used a second time on the same page, that call will be free.
Querying CCK tables is tricky. The schema structure can change completely based on configuration, for example when a field switches between single and multiple values.
The reasoning behind using API methods for DB access over direct DB calls is to provide a database abstraction layer, so that your app could move between supported database engines; it also lets your app gracefully handle any schema changes (however unlikely) that core or a module may make to the tables in question. As #Berdir says, it's also likely easier for CCK and node-reference fields, though that depends on whether you are more confident with the Drupal API and PHP or with MySQL... The payoff of doing it the Drupal way is increased future productivity and a better understanding of the codebase and what is possible :)
Oh, and my rule of thumb is: do it the Drupal way if at all possible ("possible" varying with the app's time/cost/performance/whatever requirements).
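To make the trade-off concrete, here is a hedged Drupal 6-style sketch of both routes. The content type, the two nodereference fields (field_ref_a, field_ref_b) and the CCK table name {content_type_example} are illustrative assumptions; as noted above, the real table layout depends on field configuration.
// The Drupal way: three node_load() calls, full node objects, all hooks respected.
$node  = node_load($nid);
$ref_a = node_load($node->field_ref_a[0]['nid']);
$ref_b = node_load($node->field_ref_b[0]['nid']);

// The direct-query way: one query with joins, but it bypasses node-related hooks
// and is tied to the current CCK table/column layout.
$result = db_query("SELECT n.title, a.title AS ref_a_title, b.title AS ref_b_title
                    FROM {node} n
                    INNER JOIN {content_type_example} c ON c.nid = n.nid AND c.vid = n.vid
                    INNER JOIN {node} a ON a.nid = c.field_ref_a_nid
                    INNER JOIN {node} b ON b.nid = c.field_ref_b_nid
                    WHERE n.nid = %d", $nid);
$row = db_fetch_object($result);
// $row->ref_a_title and $row->ref_b_title hold the referenced nodes' titles.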

Drupal Views api, add simple argument handler

Background: I have a complex search form that stores the query and its hash in a cache. Once the cache is set, I redirect to something like /searchresults/e6c86fadc7e4b7a2d068932efc9cc358, where that big long string on the end is the md5 hash of my query. I need to make a new argument so Views knows what the hash is good for.
The reason for all this hassle is that my original search form is way too complex and has way too many arguments to consider putting them all into the path and doing the filtering with the normal Views arguments.
Now for my question. I have been reading the Views 2 documentation but am not figuring out how to accomplish this custom argument. It doesn't seem like this should be as hard as it appears to be. Leaving aside any knowledge of the Views API, it would seem that all I need is a callback function that takes the argument from the path as its only argument and returns a list of node IDs to filter to.
Can anyone point me to a solution or give me some example code?
Thanks for your help! You guys are great.
PS. I am pretty sure that my design is the best I can come up with, so let's not get off my question and into cross-checking my design logic if we can help it.
It's not as easy as you would like to make it.
In Views, arguments are used to return objects, e.g. a user, node, term, or custom object. So you could write some custom code to get the "query object", but that would only be the first step. You would then need to get the info out of the query object: either try making a custom relationship bond with the nodes, or build your own filter to generate the SQL needed. This can quickly become a confusing time sink.
Instead, I would suggest that you use hook_views_query_alter(), which will allow you to alter the query. Since you already have the SQL, it's just a matter of checking for the hash and, if it's there, altering the query. That should be pretty simple to do. The only thing that is a bit tricky is that you have to build the query with the query object that Views uses, but it's not that hard to figure out.
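A hedged Views 2 (Drupal 6) sketch of that suggestion; the module name, the view name 'searchresults', and the cache ID prefix under which the search form stored its matching node IDs are illustrative assumptions:
/**
 * Implementation of hook_views_query_alter().
 */
function mymodule_views_query_alter(&$view, &$query) {
  if ($view->name != 'searchresults') {
    return;
  }
  // /searchresults/<md5-hash> -- grab the hash straight from the path.
  $hash   = arg(1);
  $cached = cache_get('mymodule_search_' . $hash);
  if ($cached && !empty($cached->data)) {
    // The cached entry is assumed to hold the node ids matching this search.
    $nids = array_map('intval', (array) $cached->data);
    // Restrict the view's results to those nodes.
    $query->add_where(0, 'node.nid IN (' . implode(',', $nids) . ')');
  }
  else {
    // Unknown or expired hash: return no results.
    $query->add_where(0, '1 = 0');
  }
}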
