Sails.js - Is there intended support for a "one-way-many" association?

I'm interested in a one-way-many association. To explain:
// Dog.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    favoriteFoods: {
      collection: 'food',
      dominant: true
    }
  }
};
and
// Food.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    cost: {
      type: 'integer'
    }
  }
};
In other words, I want a Dog to be associated w/ many Food entries, but as for Food, I don't care which Dog is associated.
If I actually implement the above, believe it or not it works. However, the table for the association is named in a very confusing manner - even more confusing than normal ;)
dog_favoritefoods__food_favoritefoods_food, with id, dog_favoritefoods, and food_favoritefoods_food.
REST blueprints function just fine with the Dog model; I don't see anything that "looks bad" except for the funky table name.
So, the question is, is it supposed to work this way, and does anyone see something that might potentially go haywire?

I think you should be ok.
However, there does not seem to be any reason not to complete the association as a full many-to-many. Everything is already being created for that single collection: the join table and its attributes are already there. The only thing missing in this equation is the reference back on Food.
I could understand hesitating if putting the association on Food created another table or another odd join, but that work has already been done. There is really no overhead to creating the other association.
So you might as well create it, thus avoiding any potential conflicts, unless you have a really compelling reason not to.
Edit: Based on the comments below, note that one could see some overhead at lift, based on the blueprints and dynamic finders that get created.
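For reference, the completed two-way association might be sketched like this (the `eatenBy` attribute name and the `via` wiring are illustrative, not from the question):

```javascript
// Dog.js -- hypothetical completed many-to-many
module.exports = {
  attributes: {
    name: { type: 'string' },
    favoriteFoods: {
      collection: 'food',
      via: 'eatenBy',   // points at the new attribute on Food
      dominant: true
    }
  }
};

// Food.js
module.exports = {
  attributes: {
    name: { type: 'string' },
    cost: { type: 'integer' },
    eatenBy: {
      collection: 'dog',
      via: 'favoriteFoods'
    }
  }
};
```

With `via` on both sides, Waterline treats the two attributes as one association and derives the join-table name from both of them, instead of auto-generating the doubled one-way name.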


How to normalize data from the server in an easy way in ngrx

I'm using ngrx/redux pattern in my app.
In the Normalizing State Shape article, it is written that I should create a "table" for each object type and link between them by id.
for example:
posts = [{ id, author, comments: ["commentId1", "commentId2"....] }]
comments = [{ id: 'commentId1', comment: '..' } … ]
From my server side I get the object nested within,
posts: [ { id, author, comments: [ { id, comment } ] } ..]
So I need to write code that reshapes the object to match the normalized state, for each array property in my objects?
That sounds like a lot of work. First, am I right that I need to do that? Second, if so, is there an easy way to handle it?
I recently faced the same problem. I ended up using NGRX Entity with different states. In your case, one state for posts and one for comments. One could go further and normalize everything much more, but as you said it is a lot of work and I am not sure if it is worth it.
I found Todd Motto's tutorials to be really good: https://www.youtube.com/watch?v=al0LNgH3I4A
One way or another, you will still need a mapper that maps your server objects into models you can use in your app. Different selectors can then help you easily get the right comments for a given post.
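Such a mapper can be quite small. Here is a framework-free sketch that follows the field names from the example above (everything else is assumed):

```javascript
// Normalize nested posts from the server into flat "tables" keyed by id.
function normalizePosts(serverPosts) {
  const posts = {};
  const comments = {};
  for (const post of serverPosts) {
    // Replace the nested comment objects with an array of comment ids.
    posts[post.id] = {
      ...post,
      comments: post.comments.map((c) => c.id),
    };
    for (const c of post.comments) {
      comments[c.id] = c;
    }
  }
  return { posts, comments };
}

// Example:
const normalized = normalizePosts([
  { id: 'p1', author: 'amy', comments: [{ id: 'c1', comment: 'hi' }] },
]);
// normalized.posts.p1.comments is ['c1']
```

A selector then only has to look up ids: `post.comments.map(id => state.comments[id])`.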

Build File Tree of Cloud Storage in Firebase Firestore [duplicate]

What is a clean/efficient method for storing a directory hierarchy/tree in a key-value database (in my case MongoDB, but any of them)?
For example a tree structure
- Cars
+ Audi
+ BMW
- M5
+ Ford
- Color
+ Red
- Apple
- Cherry
+ Purple
- Funny
The method I am using now: each object links to its parent
{
  dir: "red",
  parent-dir: "color"
}
This makes it very efficient and fast to insert and reorder any part of the tree (for example, moving Red and all its children into the Cars directory).
But this method is poor when I want to get all subdirectories and their children for a given directory, recursively. To make parsing efficient I could instead use a structure like:
{
  dir: "red",
  children: "audi, bmw, ford"
}
{
  dir: "bmw",
  children: "m5"
}
But if I want to modify the tree, a whole bunch of objects need to be touched and modified.
Are there any other methods to storing a directory structure in a KV store?
The method you are currently using is called the adjacency list model.
Another model to store hierarchical data in a (relational) database is the nested set model. Its implementation in SQL databases is well known. Also see this article for the modified preorder tree traversal algorithm.
A very simple method: you could store a path per object - with those it should be easy to query trees in NOSQL databases:
{ path: "Color", ... }
{ path: "Color.Red", ... }
{ path: "Color.Red.Apple", ... }
{ path: "Color.Red.Cherry", ... }
When nodes are removed or renamed, some paths must be updated. But in general, this method looks promising. You just have to reserve a special character as a separator. The storage-space overhead should be negligible.
Edit: this method is called the materialized path.
Finally, here is a comparison of different methods for hierarchical data in NOSQL databases.
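The subtree query that makes materialized paths attractive can be sketched in plain JavaScript (the array filter stands in for a database query; the helper names are hypothetical):

```javascript
// Escape regex metacharacters so path segments are matched literally.
function escapeRegex(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Build a regex matching a node and all of its descendants.
// In MongoDB this would be: db.nodes.find({ path: subtreeRegex('Color.Red') })
function subtreeRegex(path) {
  return new RegExp('^' + escapeRegex(path) + '(\\.|$)');
}

const docs = [
  { path: 'Color' },
  { path: 'Color.Red' },
  { path: 'Color.Red.Apple' },
  { path: 'Color.Red.Cherry' },
  { path: 'Color.Purple' },
];

const subtree = docs.filter((d) => subtreeRegex('Color.Red').test(d.path));
// subtree: Color.Red, Color.Red.Apple, Color.Red.Cherry
```

The `(\.|$)` suffix matters: without it, a sibling like `Color.Redwood` would incorrectly match the `Color.Red` prefix.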
I don't have a huge amount of NOSQL experience, so this isn't a definitive answer, but here's how I'd approach it:
I would likely use your first approach, where you have:
{
  dir: 'dir_name',
  parent_dir: 'parent_dir_name'
}
And then set up a map-reduce to quickly query the children of a directory. MongoDB's map-reduce functionality is still only available in the development branch and I haven't worked with it yet, but in CouchDB (and, I assume, with a few modifications, in MongoDB) you could do something like:
map:
function(doc) {
  emit(doc.parent_dir, doc.dir);
}
reduce:
function(key, values) {
  return values;
}
Which would give you the list of sub-directories for each parent directory.
I suggest storing a heap of the ids of the data items.
I think this is the best plan. If you need lots and lots of stuff, any heap element could be an index to another heap.
e.g.
{ "id:xxx", "id:yyy", "sub-heap-id:zzz", ... }
If this is not clear, post a comment and I will explain more when I get home.
Make an index!
http://www.mongodb.org/display/DOCS/Indexes

Apollo / GraphQl - Type must be Input type

Reaching out to you all as I am in the process of learning and integrating Apollo and GraphQL into one of my projects. So far it has gone OK, but now I am trying to add some mutations and I am struggling with the Input type versus the Query type. It feels more complicated than it should be, so I am looking for advice on how to manage my situation. The examples I found online always use very basic schemas, but reality is always more complex: my schema is quite big and looks as follows (I'll copy just a part):
type Calculation {
  _id: String!
  userId: String!
  data: CalculationData
  lastUpdated: Int
  name: String
}
type CalculationData {
  Loads: [Load]
  validated: Boolean
  x: Float
  y: Float
  z: Float
  Inputs: [Input]
  metric: Boolean
}
Then Inputs and Loads are defined, and so on...
For this I want a mutation to save the "Calculation", so in the same file I have this:
type Mutation {
  saveCalculation(data: CalculationData!, name: String!): Calculation
}
My resolver is as follow:
export default resolvers = {
  Mutation: {
    saveCalculation(obj, args, context) {
      if (context.user && context.user._id) {
        const calculationId = Calculations.insert({
          userId: context.user._id,
          data: args.data,
          name: args.name
        })
        return Calculations.findOne({ _id: calculationId })
      }
      throw new Error('Need an account to save a calculation')
    }
  }
}
Then my mutation is the following:
import gql from 'graphql-tag';

export const SAVE_CALCULATION = gql`
  mutation saveCalculation($data: CalculationData!, $name: String!) {
    saveCalculation(data: $data, name: $name) {
      _id
    }
  }
`
Finally I am using the Mutation component to try to save the data:
<Mutation mutation={SAVE_CALCULATION}>
  {(saveCalculation, { data }) => (
    <div onClick={() => saveCalculation({ variables: { data: this.state, name: 'name calcul' } })}>SAVE</div>
  )}
</Mutation>
Now I get the following error :
[GraphQL error]: Message: The type of Mutation.saveCalculation(data:)
must be Input Type but got: CalculationData!., Location: undefined,
Path: undefined
From my research and some other SO posts, I get that I should define an Input type in addition to the Query type, but an Input type can only have scalar types, while my schema depends on other schemas (which are not scalar). Can I create Input types that depend on other Input types, and so on, until the last one has only scalar types? I am kind of lost, because it seems like a lot of redundancy. I would very much appreciate some guidance on best practice. I am convinced Apollo/GraphQL could help my project over time, but I have to admit it is more complicated than I expected to implement when the schemas are a bit complex. Online examples generally stick to a String and a Boolean.
From the spec:
Fields may accept arguments to configure their behavior. These inputs are often scalars or enums, but they sometimes need to represent more complex values.
A GraphQL Input Object defines a set of input fields; the input fields are either scalars, enums, or other input objects. This allows arguments to accept arbitrarily complex structs.
In other words, you can't use regular GraphQLObjectTypes as the type for an GraphQLInputObjectType field -- you must use another GraphQLInputObjectType.
When you write out your schema using SDL, it may seem redundant to have to create a Load type and a LoadInput input, especially if they have the same fields. However, under the hood, the types and inputs you define are turned into very different classes of object, each with different properties and methods. There is functionality that is specific to a GraphQLObjectType (like accepting arguments) that doesn't exist on an GraphQLInputObjectType -- and vice versa.
Trying to use one in place of the other is like trying to put a square peg in a round hole. "I don't know why I need a circle. I have a square. They both have a diameter. Why do I need both?"
Outside of that, there's a good practical reason to keep types and inputs separate: in plenty of scenarios, you will expose fields on the type that you won't expose on the input.
For example, your type might include derived fields that are actually a combination of the underlying data. Or it might include fields for relationships with other data (like a friends field on a User). In both cases, it wouldn't make sense to include these fields in the data submitted as an argument to some field. Likewise, you might have an input field that you wouldn't want to expose on its type counterpart (a password field comes to mind).
Yes, you can:
The fields on an input object type can themselves refer to input object types, but you can't mix input and output types in your schema. Input object types also can't have arguments on their fields.
Input types are meant to be defined in addition to normal types. Usually they'll have some differences, eg input won't have an id or createdAt field.
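Applied to the schema in the question, the mirrored input types might look like this (a sketch; the `LoadInput` and `InputInput` names are illustrative and would depend on your actual Load and Input definitions):

```graphql
input CalculationDataInput {
  Loads: [LoadInput]     # input mirror of Load
  validated: Boolean
  x: Float
  y: Float
  z: Float
  Inputs: [InputInput]   # input mirror of Input
  metric: Boolean
}

type Mutation {
  saveCalculation(data: CalculationDataInput!, name: String!): Calculation
}
```

The output types stay as they are; only the argument position switches to the input variant.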

Should I use AutoValue to store aggregated values of a collection?

I have a Comments collection and a Page collection. Comments belong to pages. Users can upvote the comments, and I want to display the aggregated sum of all the votes of the comments belonging to a page. What would be a good way to do this?
I was thinking of keeping the sum as an AutoValue inside the page collection. Would there be a way to occasionally trigger a recalculation of the AutoValue? I don't need the sum to be updated realtime, once every 5 minutes would suffice.
Or is this a bad idea? Would it be better to use a ReactiveVar in the template to do the calculation, or something else?
Edit: There's not much special about the setup, really. Simply a comment collection with a numeric 'votes' attribute and a pages collection with a numeric autovalue 'score' that should count the votes.
The pages:
Collections.Pages = new Mongo.Collection("pages");

var PageSchema = new SimpleSchema({
  name: {
    type: String,
    min: 1
  },
  score: {
    type: Number,
    autoValue: function (doc) {
      var maxValue = 1;
      Collections.Comments.find({ pageId: doc.pageId }).map(function (mapDoc) {
        maxValue += mapDoc.votes;
      });
      return maxValue;
    }
  },
The comments:
Collections.Comments = new Mongo.Collection("comments");

var CommentSchema = new SimpleSchema({
  pageId: {
    type: String
  },
  name: {
    type: String,
    optional: true
  },
  votes: {
    type: Number,
    label: 'Total Votes',
    defaultValue: 0
  },
An alternative to periodic/timed recalculation might be to simply recalculate the value in one collection in response to a change in the other collection. You said you don't need realtime, but I don't imagine you'd mind if it was realtime.
I had a similar challenge and used the Meteor Collection Hooks package (see https://github.com/matb33/meteor-collection-hooks).
Example:
Collections.Comments.after.update(function (userId, doc) {
  // make update to aggregated value in Collections.Pages
});
I did something similar: I had News items with Comments, and I wanted to track the number of comments per news item without having to publish all the Comments.
I chose to give News a commentCount field. I had methods for adding and removing comments, and as part of that processing I looked up the associated News item and incremented or decremented its count.
What you're finding with your schema solution is that there's no clear way to trigger the autoValue. (It's an interesting use of autoValue, by the way; I'll have to keep that in mind for future use.)
So I think you're left with these choices:
1. Create upvote/downvote methods for the votes. In the method handler, do the calculations for total votes and store the updated value along with the post. This is similar to what I did with News/Comments.
2. As David suggested, use collection hooks to do something similar to #1. Though I do use collection hooks, it's usually when I don't have a clear hook into what I want to do; it's more of a catch-all, or processing driven off something I don't totally control.
3. Take care of it in the publish. When you publish the Page, also look up the vote count and simply add it dynamically to the published object. Note that this won't republish the Page when the votes change, so you would lose that reactivity; you did indicate that you were OK with periodic updates. Getting it updated would be a little tricky, because you would have to force the publisher to run again, e.g. through unsubscribing and resubscribing.
Of those three, based on what I understand of your problem, I like them in the order presented. #3 feels the least viable, but I mention it in case it fits in with something else you're doing.
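The bookkeeping behind option 1 boils down to something like this (a framework-free sketch; in a real Meteor method you would persist the result with a Mongo update rather than returning it):

```javascript
// Recompute a page's score from its comments' votes.
// Comments are plain objects of the shape { pageId, votes }.
function pageScore(comments, pageId) {
  return comments
    .filter((c) => c.pageId === pageId)
    .reduce((sum, c) => sum + c.votes, 0);
}

// In a Meteor upvote method you would typically increment in place instead:
//   Collections.Pages.update(pageId, { $inc: { score: 1 } });

const comments = [
  { pageId: 'p1', votes: 3 },
  { pageId: 'p1', votes: 2 },
  { pageId: 'p2', votes: 7 },
];
const score = pageScore(comments, 'p1'); // 5
```

The `$inc` variant avoids re-reading all comments on every vote, at the cost of having to keep every vote path (upvote, downvote, comment removal) in sync with the counter.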

Should fields be added sparingly or generously in a GraphQL API?

This is a general question, I'm making an example just to better illustrate what I mean.
Assume I have a User model, and a Tournament model, and that the Tournament model has a key/value map of user ids and their scores. When exposing this as a GraphQL API, I could expose it more or less directly like so:
Schema {
  tournament: {
    scores: [{
      user: User
      score: Number
    }]
  }
  user($id: ID) {
    id: ID
    name: String
  }
}
This gives access to all the data. However, in many cases it might be useful to get a user's scores in a certain tournament, or going from the tournament, get a certain user's scores. In other words, there are many edges that seem handy that I could add:
Schema {
  tournament: {
    scores: [{
      user: User
      score: Number
    }]
    userScore($userID: ID): Number # New edge!
  }
  user($id: ID) {
    id: ID
    name: String
    tournamentScore($tournamentID: ID): Number # New edge!
  }
}
This would probably be more practical to consume for the client, covering more use cases in a handy way. On the other hand the more I expose, the more I have to maintain.
My question is: In general, is it better to be "generous" and expose many edges between nodes where applicable (because it makes it easier for the client), or is it better to code sparingly and only expose as much as needed to get the data (because it will be less to maintain)?
Of course, in this trivial example it won't make much difference either way, but I feel like these might be important questions when designing larger API's.
I could write this as a comment, but I can't help emphasizing the following point as an answer:
Always follow the YAGNI principle. The less there is to maintain, the better. Good API design is not about how large the API is; it's about how well it meets the needs and how easy it is to use.
You can always add the new fields (what you call edges in your example) later, when you need them. KISS is good.
Or you could do this
Schema {
  tournament: {
    scores(user_ids: [ID]): [{
      user: User
      score: Number
    }]
  }
  user($id: ID) {
    id: ID
    name: String
    tournaments(tournament_ids: [ID]): [{
      tournament: Tournament
      score: Number
    }]
  }
}
And since user_ids and tournament_ids are not mandatory, a client can decide to fetch all edges, some of them, or just one.
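A resolver for this optional-argument pattern might look like the following sketch (the in-memory allScores array stands in for whatever data source backs the tournament; names are illustrative):

```javascript
// Hypothetical data source: all score rows for one tournament.
const allScores = [
  { userId: 'u1', score: 10 },
  { userId: 'u2', score: 20 },
  { userId: 'u3', score: 30 },
];

// Resolver for tournament.scores(user_ids: [ID]).
// When user_ids is omitted, return every edge; otherwise filter by id.
function scoresResolver(tournament, args) {
  if (!args.user_ids) return allScores;
  return allScores.filter((s) => args.user_ids.includes(s.userId));
}

const all = scoresResolver({}, {});                           // every edge
const some = scoresResolver({}, { user_ids: ['u1', 'u3'] });  // filtered
```

One optional argument thus covers the "all, some, or one" cases without adding a new edge per use case.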
