Prepare a query to search multiple collections using Meteor? - meteor

In my application there are 2 collections:
1. profile - fields: username, id, city and postalcode
2. material - fields: id, version, description and usage
For a single collection we build a query as shown below:
var query = {};
query.username = 'xxxx';
query.city = 'yyyy';
How do I build a query across multiple collections for data like this?
I need a query that matches username and city in profile and description in material.

MongoDB does not support joins in the way a relational DB (e.g., MySQL) does. There are a number of resources available on the subject, principally:
Discover Meteor's tutorial on reactive joins
Reactive and Non-Reactive Join with MongoDB
This HackPad about reactive join packages
What these all boil down to is this: It's not native in MongoDB, but with Meteor, you have three options:
Use a package to do it for you
De-normalize (include one collection entirely within the other)
Do it yourself with publication and routing
From your comment:
In profile there is an id, and in material there is also an id, so these two are the same
it sounds like the id fields are what you'd like to join by -- the second option (de-normalize) here is really easy, if you can get away with it. Rather than having two Collections, have one Collection in the following format:
{
  username,
  material: {
    version,
    description,
    usage
  },
  city,
  postalcode
}
and then query like so:
Profiles.find({username: 'xxxx', city: 'yyyy', "material.description": "zzzz"});
This has obvious tradeoffs in terms of storage efficiency and re-usability (for instance, if your material Collection is used elsewhere, you can't really just embed it in the profile Collection). Note that this also works if you have many materials that have the same description.
If you need to keep them separate, you can narrow down the set of fields to look in manually (option 3) -- you can see how I did that on this MeteorPad, but the basic idea is in your publish function, query one Collection first, then use that information to query the other:
// Return all profiles with a certain material description -- includes multiple materials.
Meteor.publish('profiles', function(username, city, material) {
  var materialIds = Materials.find({description: material}).map(function(mat) { return mat._id; });
  return Profiles.find({material: {$in: materialIds}, username: username, city: city});
});
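On the client, the matching subscription and local query (reusing the placeholder values from the question) might look something like this:
// Subscribe with the desired username, city and material description...
Meteor.subscribe('profiles', 'xxxx', 'yyyy', 'zzzz');

// ...then read the published profiles from the client-side collection:
var profiles = Profiles.find({username: 'xxxx', city: 'yyyy'}).fetch();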
There are a lot of choices for packages, so I will leave it up to you to find the one most suited to your needs and read the documentation if that's the route you choose.
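For instance, with the popular reywood:publish-composite package, a reactive join could be sketched roughly as below; the publication name profilesWithMaterials and the assumption that each profile stores its material id in a material field are mine, not something from the question:
Meteor.publishComposite('profilesWithMaterials', function(username, city, description) {
  return {
    // Publish the matching profiles first...
    find: function() {
      return Profiles.find({username: username, city: city});
    },
    children: [{
      // ...then, for each published profile, publish its material
      // if it has the requested description.
      find: function(profile) {
        return Materials.find({_id: profile.material, description: description});
      }
    }]
  };
});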

Related

How to combine multiple firebase docs to get a combined result?

In my Firebase DB I have 3 collections:
Users
  {user_id}: {name: "John Smith"}
Items
  {item_id}: {value: 12345}
Actions
  {action_id}: {action: "example", user: {user_id}, items: {item_id}}
Basically, instead of storing the Users and Items under the Actions collection, I just keep an ID. But now I need a list of all actions, and this also needs info from the Users and Items collections. How can I efficiently query Firebase so that I get a result that looks like this:
{
  action: "example",
  user: {
    name: "John Smith"
  },
  item: {
    value: 1234
  }
}
Unfortunately, there is no such thing in Firebase or similar databases; basically, you are looking for a traditional join, which is not a recommended thing to do in a NoSQL database.
If you want to do it in Firebase, you will need to:
Get the element you are looking for from your main collection (Actions in this case).
Then make another call to the Items collection where item_id == action.item_id.
Then assign it: actions["Item"] = item_gotten.
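A minimal sketch of these steps, assuming Cloud Firestore with the classic namespaced web SDK and the collection names from the question (the user lookup follows the same pattern as the item lookup):
var db = firebase.firestore();

async function getActionWithDetails(actionId) {
  // 1. Get the element you are looking for from the main collection (Actions).
  var actionSnap = await db.collection("Actions").doc(actionId).get();
  var action = actionSnap.data();

  // 2. Make additional calls for the referenced item and user documents.
  var itemSnap = await db.collection("Items").doc(action.items).get();
  var userSnap = await db.collection("Users").doc(action.user).get();

  // 3. Assemble the combined result the question asks for.
  return {
    action: action.action,
    user: userSnap.data(),
    item: itemSnap.data()
  };
}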
This is not a recommended approach, as I said. Usually, when you are using a NoSQL database you are expected to denormalize the structure: from your application, save the whole Item inside the Action JSON as well as in the Items collection. Yes, you will have duplicate data, but this is fine for this kind of model. You also shouldn't expect too many changes to one specific object within your whole object tree; if you are managing a big set of changes you may be using the wrong kind of DB.
For aggregation queries reference, you might check: https://firebase.google.com/docs/firestore/solutions/aggregation

Organizing a Cloud Firestore database

I can't manage to determine the best way of organizing my database for my app:
My users can create items identified by a unique ID.
The queries I need:
- Query 1: Get all the items created by a user
- Query 2: From the UID of an item, get its creator
My database is organized as follows:
Users database
user1: {
  item1_uid,
  item2_uid
},
user2: {
  item3_uid
}
Items database
item1_uid: {
  title,
  description
},
item2_uid: {
  title,
  description
},
item3_uid: {
  title,
  description
}
Query 1 is quite simple, but for query 2 I need to scan the whole Users database and go through all the item IDs to see if the one I am looking for is there. It works right now, but I'm afraid it will slow down requests as the database grows.
Should I add a field with the user ID to the item documents? If so, the query would be simpler, but I have heard that I am not supposed to store the same data twice in the database because it can lead to conflicts when adding or removing items.
Should I add a field with the user ID to the item documents?
Yes, this is a very common approach in the NoSQL world and is called denormalization. Denormalization is described, in this "famous" post about NoSQL data modeling, as "copying of the same data into multiple documents in order to simplify/optimize query processing or to fit the user’s data into a particular data model". In other words, the main driver of your data model design is the queries you plan to execute.
More concretely, you could have an extra field in your item documents that contains the ID of the creator. You could even have another one with, e.g., the name of the creator: this way, in one query, you can display the items and their creators.
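As an illustration, a sketch of the two queries against such denormalized item documents, assuming the classic namespaced web SDK and a hypothetical creatorId field:
var db = firebase.firestore();

// Query 1: get all the items created by a user -- now a single indexed query.
async function itemsCreatedBy(userId) {
  var snap = await db.collection("items").where("creatorId", "==", userId).get();
  return snap.docs.map(function(doc) { return doc.data(); });
}

// Query 2: from the UID of an item, get its creator -- no scan of Users needed.
async function creatorOf(itemId) {
  var itemSnap = await db.collection("items").doc(itemId).get();
  return itemSnap.data().creatorId; // or creatorName, if that is denormalized too
}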
Now, for maintaining these different documents in sync (for example, if you change the name of one user, you want it to be updated in the corresponding items), you can either use a Batched Write to modify several documents in one atomic operation, or rely on one or more Cloud Functions that would detect the changes of the user documents and reflect them in the item documents.
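For the batched-write option, a sketch might look like the following; the creatorName field and the list of affected item IDs are assumptions, and a single batch is limited to 500 operations:
var db = firebase.firestore();

// Rename a user and reflect the new name in the items that denormalize it.
async function renameUser(userId, newName, itemIds) {
  var batch = db.batch();
  batch.update(db.collection("users").doc(userId), {name: newName});
  itemIds.forEach(function(itemId) {
    batch.update(db.collection("items").doc(itemId), {creatorName: newName});
  });
  await batch.commit(); // all updates succeed or fail together
}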

firestore: representing a relationship

In Firestore I have a collection called things.
Each thing is owned by a user.
Each thing can be shared by the owner with other specified users.
The structure of a thing looks something like:
{
  id: "thing01",
  sharedWith: {
    "user1": true,
    "user2": true
  },
  dtCreated: 3458973948
}
When I want to retrieve all thing objects that are shared with user1, ordered by dtCreated descending, I can't do this without having to create an index on sharedWith.user1,
i.e. for every unique user ID I have to create an index on the things collection.
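For reference, a sketch of the query being described, assuming the classic namespaced web SDK; each distinct user ID needs its own composite index covering sharedWith.<userId> plus dtCreated:
var db = firebase.firestore();

// Things shared with a given user, newest first -- the query that needs a per-user index.
async function thingsSharedWith(userId) {
  var snap = await db.collection("things")
    .where("sharedWith." + userId, "==", true)
    .orderBy("dtCreated", "desc")
    .get();
  return snap.docs.map(function(doc) { return doc.data(); });
}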
Obviously this is not practical. The docs talk about using full text search for this, but this doesn't seem like a problem we would want to use full text search for.
Is there a different way I should be structuring the data to achieve what I want?
Is Firestore just the wrong technology choice for this?
It's working very well for storing the thing objects themselves.
---- update ----
This question is not a real duplicate of Firestore: Working with nested single queries because the answer provided there is very specific to the OP's context.

Relationships across multiple Realms?

Realm:
We have the following scenario: there are several stores with employees and customers; several employees may work at more than one store, and several customers may shop at several stores. This could be represented with these classes:
class Store: Object {
    dynamic var id = ""
    dynamic var address = ""
    let workers = List<Employee>()
    let customers = List<Customer>()
}
class Customer: Object {
    dynamic var id = ""
    dynamic var name = ""
    let stores = LinkingObjects(fromType: Store.self, property: "customers")
    // ... many more fields about this customer
}
class Employee: Object {
    dynamic var id = ""
    dynamic var name = ""
    let work = LinkingObjects(fromType: Store.self, property: "workers")
}
The catch here is that we must protect customer information, so none of the customer info can be present in a shared Realm; it needs to be kept secure. Neither the Store nor the Employee data is a security concern. Our current approach is to give each customer their own Realm; however, the major drawback is that this requires massive duplication, since each customer Realm must copy the store data. The other drawback is that we would be copying customer data into a shared Realm, which is a security risk. What would be the best way to architect this scenario so that it allows for relationships across different Realms?
Realm doesn't currently support "direct" object links across Realms analogous to object properties within the same Realm.
Instead, what I suggest you do is to give your objects primary keys (you can probably just declare your existing id fields as such, or create a new internalId field if your existing id field can't be used for this purpose).
Primary keys are mandatory, must be unique, and can't be changed after they are set, which makes them great for uniquely identifying objects. Our documentation discusses them in greater detail.
Then, instead of directly storing customer info/a customer object in a shared Realm, you can just store the primary keys for the relevant customers, for example in a list. (Right now you'll have to make a wrapper CustomerKey object for example to store the customer's primary key, but we plan to support collections directly containing strings or other primitive types very soon.)
You can enhance this further by adding helper methods on your objects that can be passed a customer Realm and return the relevant object (whatever object's primary key is being stored), looking it up in that Realm automatically. You can use Realm's object(ofType:forPrimaryKey:) method to look up an object by its primary key.
The main limitation is that you won't get the automatic updating of links you would get with object, list, and LinkingObjects properties. You'll have to manually perform the bookkeeping yourself.
If you have ideas for functionality you want to see in Realm that would go beyond what I've posted here, feel free to share your thoughts at our GitHub issue tracker. We welcome feature requests.

How to entirely skip validation in simple schema and allow incomplete documents to be stored?

I'm creating an order form and have a schema defined for an Order (certain required fields such as address, customer info, items selected and their quantities, etc.).
a. User visits site.
b. A unique ID is generated for their session as well as a timestamp.
var userSession = {
  _id: createId(),
  timestamp: new Date(),
};
var sessionId = userSession._id;
c. The userSession is placed in local storage.
storeInLocalStorage('blahblah', sessionObject);
d. An Order object is created with the sessionId as the only field so far.
var newOrder = {
  sessionId: sessionId
};
e. Obviously at this point the Order object won't validate according to the schema so I can't store it in Mongo. BUT I still want to store it in Mongo so I can later retrieve incomplete orders, or orders in progress, using the sessionID generated on the user's initial visit.
This won't work because it fails validation:
Orders.insert(newOrder);
f. When a user revisits the site I want to be able to get the incomplete order from Mongo and resume:
var sessionId = getLocalStorage('blahblah')._id;
var incompleteOrder = Orders.findOne({sessionId: sessionId});
So I'm not sure how to go about doing this while accomplishing these points.
I want full SimpleSchema validation on the Orders collection when the user is entering items on the forms and when the user intends to submit a full, complete order.
I want to disable SimpleSchema validation on the Orders collection and still allow storing into the DB, so that partial orders can be stored and resumed at a later time.
I can make a field conditionally required using this here but that would mean 50+ fields would be conditionally required just for this scenario and that seems super cumbersome.
It sounds like you want to have your cake, and eat it too!
I think the best approach here would be keep your schema and validation on the Orders collection, but store incomplete orders elsewhere.
You could store them in another collection (with a more relaxed schema) if you want them on the server (possibly to enable resuming on another device for a logged-in user), or more simply in Local Storage, and still enable the resume-previous-order behaviour you want.
Only write to the Orders collection when the order is complete (and passes validation).
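A minimal sketch of this approach, assuming aldeed:collection2 for attaching the schema; the DraftOrders collection name and completedOrder are hypothetical, while OrderSchema, sessionId and newOrder are the names used in the question:
// The real collection keeps full validation.
Orders = new Mongo.Collection('orders');
Orders.attachSchema(OrderSchema);

// Drafts live in a separate collection with no schema attached.
DraftOrders = new Mongo.Collection('draftOrders');

// While the user fills in the form, upsert the partial order by session id:
DraftOrders.upsert({sessionId: sessionId}, {$set: newOrder});

// On submission, insert into the validated collection and remove the draft:
Orders.insert(completedOrder); // fails validation unless the order is complete
DraftOrders.remove({sessionId: sessionId});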
Here's a variation on #JeremyK's answer: add an inProgress key to your order of type [Object]. This object would have no deeper validation. Keep your in progress order data in there until the order is final then copy/move all the relevant data into the permanent keys and remove the inProgress key. This would require that you make all the real keys optional of course. The advantage is that the object would maintain its primary key throughout the life cycle.
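A sketch of what that variation could look like in the schema, using SimpleSchema's blackbox option so the inProgress object gets no deeper validation (field names other than sessionId are hypothetical):
OrderSchema = new SimpleSchema({
  sessionId: {type: String},
  // The real order fields become optional until the order is finalised:
  address: {type: String, optional: true},
  // Unvalidated scratch space for the order while it is in progress:
  inProgress: {type: Object, blackbox: true, optional: true}
});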
I think this particular case has been solved in the meantime, but just in case: you can skip SimpleSchema validation by accessing the native MongoDB API via Collection#rawCollection():
Orders.rawCollection().insert(newOrder);
While this question is very old, in the meantime there is a better solution. You probably use SimpleSchema together with Collection2. Collection2 has the ability to attach multiple schemas to the same collection based on a selector and then validate against the correct schema.
https://github.com/Meteor-Community-Packages/meteor-collection2#attaching-multiple-schemas-to-the-same-collection
E.g. you could have a selector {state: 'finished'} and only apply the full schema to those documents, while having another selector, e.g. {state: 'in-progress'}, for unfinished orders with a schema whose fields are optional.
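A sketch of that setup, using Collection2's selector option; the schema names and the state field are assumptions based on the selectors above:
// Relaxed schema for unfinished orders, full schema for finished ones.
Orders.attachSchema(inProgressOrderSchema, {selector: {state: 'in-progress'}});
Orders.attachSchema(finishedOrderSchema, {selector: {state: 'finished'}});

// The state field in the document determines which schema it is validated against:
Orders.insert({state: 'in-progress', sessionId: sessionId}); // relaxed validation only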
