I am currently using NgRx Data to perform CRUD operations on a couple of entities in my project. Now I have to implement pagination, so the REST API response is going to look like this:
{
  "page": 1,
  "per_page": 10,
  "total": 100,
  "total_page": 10,
  "data": [
    { ... },
    { ... },
    { ... }
  ]
}
AFAIK, NgRx Data works well with plain entity collections, but I have no clue how to deal with a response shaped like this. Could you please point me in the right direction? Thank you.
I was facing a similar issue. For people who are new to NgRx Data: I created an EntityDataListInterface similar to this:
{
  page: number,
  per_page: number,
  total: number,
  total_page: number,
  data: EntityDataItem[]
}
For each section I am working on I create a separate service; let's call it ComponentService. Inside this ComponentService I access the EntityService (which extends EntityCollectionServiceBase<EntityDataItem>) and the entity's DataService (which extends DefaultDataService<EntityDataListInterface>).
Once the API returns the EntityDataListInterface data, you can use addManyToCache to add the items to the entity cache.
Inside the module, register EntityDataItem and pass a filterFn. Now you can call setFilter to filter the entities by index (or any other pagination logic, as shown below), and the result will be accessible via filteredEntities$.
import { EntityMetadataMap } from '@ngrx/data';

// eds: EntityDefinitionService injected in the constructor
const entityMetadata: EntityMetadataMap = {
  EntityDataItem: {
    filterFn: (entities: EntityDataItem[], pattern: { startIndex: number, endIndex: number }) => {
      return entities.filter((entity, index) => {
        return (index >= pattern.startIndex) && (index <= pattern.endIndex);
      });
    }
  }
};

eds.registerMetadataMap(entityMetadata);
Subscribe to filteredEntities$ in your component and it will solve the pagination issue.
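To tie the pieces together, here is a minimal sketch of such a ComponentService. The service and method names (EntityDataItemService, EntityDataItemDataService, getPage) are assumptions for illustration, not part of the NgRx Data API:

import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ComponentService {
  // the component subscribes to this
  readonly items$: Observable<EntityDataItem[]>;

  constructor(
    private entityService: EntityDataItemService,   // extends EntityCollectionServiceBase<EntityDataItem>
    private dataService: EntityDataItemDataService  // extends DefaultDataService and calls the paginated endpoint
  ) {
    this.items$ = this.entityService.filteredEntities$;
  }

  loadPage(page: number, perPage: number): void {
    // getPage is a hypothetical method that hits the paginated REST endpoint
    this.dataService.getPage(page, perPage).subscribe(response => {
      // add the returned items to the entity cache
      this.entityService.addManyToCache(response.data);
      // expose only the slice that belongs to the requested page via filteredEntities$
      this.entityService.setFilter({
        startIndex: (page - 1) * perPage,
        endIndex: page * perPage - 1
      });
    });
  }
}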
Currently, I'm building an app with logic similar to the following:
...
const user = {
  isAdmin: true,
  company: '5faa6a847b42bf47b8f785a1',
  projects: ['5faa6a847b42bf47b8f785a2']
};

function defineAbilityForUser(user) {
  return defineAbility((can) => {
    if (user.isAdmin) {
      can('create', 'ProjectTime', {
        company: user.company
      });
    }
    can(
      'create',
      'ProjectTime',
      ["company", "project", "user", "start", "end"],
      {
        company: user.company,
        project: {
          $in: user.projects
        }
      }
    );
  });
}

const userAbility = defineAbilityForUser(user);
console.log( permittedFieldsOf(userAbility, 'create', 'ProjectTime') );
// console output: ['company', 'project', 'user', 'start', 'end']
Basically, an admin should be allowed to create a project time with no field restrictions.
And a non-admin user should only be allowed to set the specified fields, and only for projects they belong to.
The problem is that I would expect to get [] as output, because an admin should be allowed to set all fields of a project time.
The only solution I found was to list all fields on the admin rule. But this requires a lot of migration work later when new fields are added to the project time model. (Also, wrapping the second rule in an else block is not possible in my case.)
Is there any other, better way to do this? Or would it be better if the permittedFieldsOf function prioritized the rule with no field restrictions?
There is actually no way for CASL to know what "all fields" means in the context of your models. It knows almost nothing about their shape and relies on the conditions you provide to check objects later, so it does not have full information.
What you need to do is pass the 4th argument to override the fieldsFrom callback. Check the API docs and the reference implementation in @casl/mongoose.
In CASL v5, that parameter is mandatory, so this confusion will disappear very soon.
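For illustration, a minimal sketch of that 4th argument; PROJECT_TIME_FIELDS is an assumed constant that enumerates the model's fields, since CASL cannot infer them:

import { permittedFieldsOf } from '@casl/ability/extra';

// all fields of the ProjectTime model (assumed; keep in sync with your schema)
const PROJECT_TIME_FIELDS = ['company', 'project', 'user', 'start', 'end'];

const fields = permittedFieldsOf(userAbility, 'create', 'ProjectTime', {
  // called for every matching rule; rules without field restrictions
  // (like the admin rule) fall back to the full field list
  fieldsFrom: rule => rule.fields || PROJECT_TIME_FIELDS
});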
I am currently using ReactiveAggregate to find a subset of Product data, like this:
ReactiveAggregate(this, Products, [
  { $match: {} },
  { $project: {
    title: true,
    image: true,
    variants: {
      $filter: {
        input: "$variants",
        as: "variant",
        cond: {
          $setIsSubset: [['$$variant.id'], user.variantFollowing]
        }
      }
    }
  }}
], { clientCollection: 'aggregateVariants' });
As you can see, a variant is returned only if it matches user.variantFollowing. When a user 'follows' a product, the ID is added to their user object. However, if I understand correctly, this does not trigger ReactiveAggregate to fetch the new subset when that happens; only on a full page refresh do I get the correct (latest) data.
Is this the correct way to approach this?
I could store the user's ID as part of the Product object, but that data would be nested in two places, and I think I would need the Mongo 3.5 updates to be able to update it accurately. So I'm looking for how to do this in Meteor 1.5+ / Mongo 3.2.12.
So, I've been able to get there by wrapping the subscription to the aggregate collection in an autorun, like this:
Template.followedProducts.onCreated(function() {
  Meteor.subscribe('products');
  this.autorun(() => {
    Meteor.subscribe('productsFollowed');
  });
  // ... rest of function
For context, productsFollowed is the subscription to retrieve aggregateVariants from the original question.
Thanks to robfallows in this post: https://forums.meteor.com/t/when-and-how-to-use-this-autorun/26075/6
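A hedged variation on the above (the variantFollowing field on the user document and the subscription argument are assumptions): giving the autorun an explicit reactive dependency makes the subscription re-run whenever the followed list changes.

Template.followedProducts.onCreated(function () {
  this.autorun(() => {
    const user = Meteor.user(); // reactive data source
    const following = (user && user.variantFollowing) || [];
    // re-subscribes whenever variantFollowing changes on the user document
    Meteor.subscribe('productsFollowed', following);
  });
});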
I have a JSON object similar to this in the Redux store of my application:
tables: [
  {
    "id": "TableGroup1",
    "objs": [
      { "tableName": "Table1", "fetchURL": "www.mybackend.[....]/get/table1" },
      { "tableName": "Table2", "fetchURL": "www.mybackend.[....]/get/table2" },
      { "tableName": "Table3", "fetchURL": "www.mybackend.[....]/get/table3" }
    ]
  },
  {
    "id": "TableGroup2",
    "objs": [
      { "tableName": "Table4", "fetchURL": "www.mybackend.[....]/get/table4" },
      { "tableName": "Table5", "fetchURL": "www.mybackend.[....]/get/table5" },
      { "tableName": "Table6", "fetchURL": "www.mybackend.[....]/get/table6" }
    ]
  }
];
To load it, I use the following call (TableApi is a mock API loaded locally, and beginAjaxCall keeps track of how many Ajax calls are currently active):
export function loadTables() {
  return function (dispatch, getState) {
    dispatch(beginAjaxCall());
    return TableApi.getAllTables().then(tables => {
      dispatch(loadTablesSuccess(tables));
    }).then(() => {
      // Looping through the store to execute sub requests
    }).catch(error => {
      throw (error);
    });
  };
}
I then want to loop through my tables, call the different URLs, and populate a new field called data, so that an object looks like this after a call:
{"tableName":"Table1","fetchURL":"www.mybackend.[....]/get/table1","data":[{key:"...",value:"..."},{key:"...",value:"..."},{key:"...",value:"..."},.....]}
The data will be updated frequently by re-calling the fetch URL, and the table should then re-render in the view.
Which leads me to my questions:
- Is this architecturally sound?
- How would Redux handle frequent changes? (Because of immutability, will I get performance issues from frequently deep-copying a table instance with 10,000+ data entries?)
And more importantly, what code could I put in place of the comment so that it serves its intended purpose? I've tried:
let i;
for (i in getState().tables) {
  let d;
  for (d in getState().tables[i].objs) {
    dispatch(loadDataForTable(d, i));
  }
}
This code, however, doesn't seem like the best implementation, and I get errors.
Any suggestions are welcome, thanks!
First of all, you don't need to make a deep copy of all the tables.
For the sake of immutability you only need to copy the changed items.
For your data structure it would look like this:
function updateTables(tables, table) {
  return tables.map(tableGroup => {
    if (tableGroup.objs.find(obj => table.tableName === obj.tableName)) {
      // if the table is in this group, copy the group
      return updateTableGroup(tableGroup, table);
    } else {
      // otherwise leave it unchanged
      return tableGroup;
    }
  });
}

function updateTableGroup(tableGroup, table) {
  return {
    ...tableGroup,
    objs: tableGroup.objs.map(obj => {
      return table.tableName === obj.tableName ? table : obj;
    })
  };
}
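As for the loop that replaces the comment in loadTables, here is a minimal sketch, assuming loadDataForTable(table, groupId) is another thunk that fetches table.fetchURL and dispatches a success action with the resulting data:

export function loadTables() {
  return function (dispatch, getState) {
    dispatch(beginAjaxCall());
    return TableApi.getAllTables().then(tables => {
      dispatch(loadTablesSuccess(tables));
    }).then(() => {
      // iterate over the freshly loaded state and kick off one fetch per table
      getState().tables.forEach(tableGroup => {
        tableGroup.objs.forEach(table => {
          dispatch(loadDataForTable(table, tableGroup.id));
        });
      });
    }).catch(error => {
      throw error;
    });
  };
}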
Is it possible to do something like a "filtered subscription" in Meteor? For example, if you have a filter on the month of June, switching to July should fetch the new data and subscribe to it.
I tried something like:
Meteor.publish("report", function (query, opt) {
return Report.find({ 'timestamp' : { $gte : query.from, $lt: query.to }}, options);
}
and on the client, with Iron Router:
HomeController = RouteController.extend({
  template: "home",
  waitOn: function () {
    var dates = getDates();
    return Meteor.subscribe("report", dates);
  },
  fastRender: true
});
but it does not work.
Is there a better method to dynamically subscribe? Or does it just help to navigate with a URL pattern?
Thanks.
Is there a better method to dynamically subscribe?
There is an alternative method using template subscriptions, example below. I don't think it's better, just different.
Or does it just help to navigate with a URL pattern?
If you want to handle the subscriptions in the Router, then storing the subscription query params in the URL does help and has some added benefits in my opinion. But it depends on your desired app behavior.
Using the Template Subscriptions approach:
This Meteor Pad example will subscribe to a range of data based on a select (see the sketch after the link):
http://meteorpad.com/pad/26dd8YQevBbA5uNGA/Dynamic%20Subscription
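In case the Meteor Pad link is no longer available, here is a minimal sketch of that idea; the template, publication, and event selector names are illustrative:

Template.report.onCreated(function () {
  this.range = new ReactiveVar({ low: 0, high: 10 });
  this.autorun(() => {
    const range = this.range.get();
    // re-runs (and swaps the subscription) whenever the ReactiveVar changes
    this.subscribe('itemData', range.low, range.high);
  });
});

Template.report.events({
  'change .range-select'(event, template) {
    // e.g. option values like "0-10", "10-20"
    const [low, high] = event.target.value.split('-').map(Number);
    template.range.set({ low, high });
  }
});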
Using the Iron Router approach:
This route example will subscribe based on the URL: "items/0/10" will subscribe to the itemData with a range of zero to 10.
Router.route('Items', {
  name: 'Items',
  path: 'items/:low/:high',
  subscriptions: function () {
    var low = parseInt(this.params.low);
    var high = parseInt(this.params.high);
    return [
      Meteor.subscribe("itemData", low, high)
    ];
  },
  action: function () {
    if (this.ready()) {
      this.render();
    } else {
      this.render('Loading');
    }
  }
});
I think either approach is fine; it depends on your interface. Using the URL is nice because you can provide links directly to a range of data and use the browser's forward and back buttons; it's good for paging through lists of data.
The template subscriptions approach might be appropriate to change the data on a graph.
The specific issue you are having might be due to the fact that getDates() is not reactive, so the subscription only runs once, when the route's waitOn first runs.
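A hedged sketch of that point, assuming the dates are kept in Session (the keys are illustrative): because waitOn runs in a reactive computation, reading the dates from a reactive source makes it re-run and re-subscribe when they change.

function getDates() {
  return {
    from: Session.get('reportFrom'),
    to: Session.get('reportTo')
  };
}

// elsewhere, e.g. in the month picker's event handler:
Session.set('reportFrom', new Date(2015, 6, 1)); // switching to July...
Session.set('reportTo', new Date(2015, 7, 1));   // ...re-triggers waitOn and the subscription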
Meteor Collections have a transform ability that allows behavior to be attached to the objects returned from Mongo.
We want to have autopublish turned off so the client does not have access to the database collections, but we still want the transform functionality.
We are sending data to the client with a more explicit Meteor.publish/Meteor.subscribe or the RPC mechanism (Meteor.call()/Meteor.methods()).
How can we have the Meteor client automatically apply a transform like it will when retrieving data directly with the Meteor.Collection methods?
While you can't directly use transforms, there is a way to transform the result of a database query before publishing it. This is what the "publish the current size of a collection" example in the Meteor docs describes.
It took me a while to figure out a really simple application of that, so maybe my code will help you, too:
Meteor.publish("publicationsWithHTML", function (data) {
var self = this;
Publications
.find()
.forEach(function(entry) {
addSomeHTML(entry); // this function changes the content of entry
self.added("publications", entry._id, entry);
});
self.ready();
});
On the client you subscribe to this:
Meteor.subscribe("publicationsWithHTML");
But your model still needs to create a collection (on both sides) called 'publications':
Publications = new Meteor.Collection('publications');
Mind you, this is not a very good example, as it doesn't maintain the reactivity. But I found the count example a bit confusing at first, so maybe you'll find it helpful.
(Meteor 0.7.0.1) - Meteor does allow behavior to be attached to the objects returned via pub/sub.
This is from a pull request I submitted to the meteor project.
Todos = new Meteor.Collection('todos', {
  // transform allows behavior to be attached to the objects returned via the pub/sub communication.
  transform: function (todo) {
    todo.update = function (change) {
      Meteor.call('Todos_update', this._id, change);
    };
    todo.remove = function () {
      Meteor.call('Todos_remove', this._id);
    };
    return todo;
  }
});

todosHandle = Meteor.subscribe('todos');
Any object returned via the 'todos' topic will have the update() and remove() functions, which is exactly what I want: I can now attach behavior to the returned data.
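Usage then looks roughly like this (a sketch based on the snippet above; someTodoId is a placeholder, and Todos_update / Todos_remove are the server methods it calls):

const todo = Todos.findOne(someTodoId);
todo.update({ checked: true }); // delegates to Meteor.call('Todos_update', ...)
todo.remove();                  // delegates to Meteor.call('Todos_remove', ...)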
Try:
let transformTodo = (fields) => {
  fields._pubType = 'todos';
  return fields;
};

Meteor.publish('todos', function () {
  let subHandle = Todos
    .find()
    .observeChanges({
      added: (id, fields) => {
        fields = transformTodo(fields);
        this.added('todos', id, fields);
      },
      changed: (id, fields) => {
        fields = transformTodo(fields);
        this.changed('todos', id, fields);
      },
      removed: (id) => {
        this.removed('todos', id);
      }
    });

  this.ready();

  this.onStop(() => {
    subHandle.stop();
  });
});
Currently, you can't apply transforms on the server to published collections. See this question for more details. That leaves you with either transforming the data on the client or using a Meteor method. In a method, you can have the server do whatever you want to the data.
In one of my projects, we perform our most expensive query (it joins several collections, denormalizes the documents, and trims unnecessary fields) via a method call. It isn't reactive, but it greatly simplifies our code because all of the transformation happens on the server.
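For illustration, a minimal sketch of that method approach; the method name and the addSomeHTML helper (borrowed from the earlier answer) are assumptions:

// server
Meteor.methods({
  publicationsWithHTML() {
    return Publications.find().fetch().map(entry => {
      addSomeHTML(entry); // server-side transform, same idea as in the publish example above
      return entry;
    });
  }
});

// client: not reactive, so call it again whenever fresh data is needed
Meteor.call('publicationsWithHTML', (err, docs) => {
  if (!err) {
    // render docs
  }
});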
To extend @Christian Fritz's answer with a reactive solution, using peerlibrary:reactive-publish:
Meteor.publish("todos", function() {
const self = this;
return this.autorun(function(computation) {
// Loop over each document in collection
todo.find().forEach(function(entry) {
// Add function to transform / modify each document here
self.added("todos", entry._id, entry);
});
});
});