I'm working on a library app, and am using Firestore with the following two (simplified) collections, books and wishes:
Book
- locationIds[] # Libraries where the book is in stock
Wish
- userId # User who has wishlisted a book
- bookId # Book that was wishlisted
The challenge: I would like to be able to make a query which gets a list of all Book IDs which have been wishlisted by a user AND are currently available in a library.
I can imagine two ways to solve this:
APPROACH 1
Copy the locationIds[] array to each Wish, containing the IDs of every location having a copy of that book.
My query would then be:
db.collection('wishes')
  .where('userId', '==', myUserId)
  .where('locationIds', 'array-contains', myLocationId)
But I expect my Wishes collection to be pretty large, and I don't like the idea of having to update the locationIds[] of all (maybe thousands) of wishes whenever a book's location changes.
APPROACH 2
Add a wishers[] array to each Book, containing the IDs of every user who has wishlisted it.
Then the query would look something like:
db.collection('books')
  .where('locationIds', 'array-contains', myLocationId)
  .where('wishers', 'array-contains', myUserId)
The problem with this is that the wishers array for a particular book may grow pretty huge (I'd like to support thousands of wishes on each book), and then this becomes a mess.
Help needed
In my opinion, neither of these approaches is ideal. If I had to pick one, I would probably go with Approach 1, simply because I don't want my Book object to contain such a huge array.
I'm sure I'm not the first person to come across this sort of problem. Is there a better way?
You could try splitting the query into two separate requests. For instance, using the Python client:
wishes = db.collection('wishes').where('userId', '==', my_user_id).stream()
book_ids = [wish.get('bookId') for wish in wishes]
books = db.collection('books').where('bookId', 'in', book_ids).stream()
result = [book.get('bookId') for book in books if book.to_dict().get('locationIds')]
Notice that this is just a sketch: the in operator supports at most 10 values, so for longer wish lists you would have to split book_ids into chunks of 10 (see the sketch below). A good idea would be to store the length of locationIds, or whether it is empty, in a separate attribute, so you could skip the last filtering step by querying the books with:
books = db.collection('books').where('bookId', 'in', book_ids).where('hasLocations', '==', True).stream()
Although you would still have to iterate to collect just the bookId values.
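To work around the 10-value limit on in, here is a minimal chunking sketch (assuming the google-cloud-firestore Python client; all names are placeholders):
from google.cloud import firestore

db = firestore.Client()

def wished_books_in_stock(my_user_id):
    wishes = db.collection('wishes').where('userId', '==', my_user_id).stream()
    book_ids = [wish.get('bookId') for wish in wishes]

    result = []
    # 'in' accepts at most 10 values per query, so process the IDs in chunks.
    for i in range(0, len(book_ids), 10):
        chunk = book_ids[i:i + 10]
        for book in db.collection('books').where('bookId', 'in', chunk).stream():
            if book.to_dict().get('locationIds'):
                result.append(book.get('bookId'))
    return result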
Also, be careful when using arrays in Firestore, since their support for queries and atomic updates is limited, as explained in the Firebase blog.
Is it mandatory to use NoSQL? Maybe you could model this M:M relation better in SQL. Bear in mind that I'm no database expert, though.
So I'm working on a real-time recommendation problem. For example, I have a node (A) with the label (:Person) who has friends, and nodes (B) with the label (:Game).
Node (A) has liked a certain game, and his friends have liked other games, so I recommend those other games to him. The catch is that I need to exclude the games which he has already liked or played.
It seems like it should be easy with the 'NOT' keyword, but I couldn't find the right code for it yet, although I've tried a lot of variations.
The one that seems closest to me is:
MATCH (A:Person)-[:Friend]-(n:Person)
WHERE A <> n
WITH DISTINCT n
MATCH (n)-[:LIKED]-(B:Game)-[:ON]-(:steam), (k:Person {name: 'John'})
WHERE NOT ((k)-[:LIKED]-(:Game)-[:ON]-(:steam))
RETURN B
This is supposed to recommend the games John's friends liked, without the games John has already liked.
Anyway, when I run this, the graph just freezes for a while and then shuts down, which is another problem I want to ask about.
Thanks for the help
The last WHERE clause has very few constraints on it, which probably explains the hang/timeout. It may help to have a variable name for each label, either to constrain the query or to receive the nodes. More like this:
WHERE NOT ((k:Person {name: 'John'})-[:LIKED]->(B:Game)-[:ON]->(C:steam))
RETURN B
Specify directional -> relationships (as above) in Cypher queries if possible; it usually provides the answer you want, and is faster.
Adding the variable names and relationship directions also makes the query easier to read and understand, and lets you look at the node/relationship values when debugging.
I may be wrong, but the :steam label doesn't look right to me. What are example values? I'm wondering if you meant to have a :Service label, where steam would be a node instance?
Note: if you provide a create-nodes/relationships script for a small example of this database (e.g. a dozen nodes with these relationships), it would be easier to provide a working Cypher example.
If you want to find the distinct games on Steam that John's friends liked but John has not yet liked or played, something like this should work:
MATCH (j:Person{name:'John'})-[:FRIEND]-(:Person)-[:LIKED]->(g:Game)-[:ON]-(:steam)
WHERE NOT (j)-[:LIKED|PLAYED]->(g)
RETURN DISTINCT g
First off, I know how Firestore works and have spent a lot of time evaluating different approaches for a good structure. Still, I am considering the following scenario:
There is a database of known recipes. Users can add recipes, but these have to be confirmed to be real recipes and not just variations of existing ones. Every user can then pick recipes from the user-generated list to state that they know how to cook them (or add new ones).
Now I want users to share their list of recipes with others, but this is where I am not sure how it can best be accomplished with Firestore. The tricky part is that I want to show all the recipes at once, without pagination.
I am currently evaluating two possibilities:
Subcollections
Whenever a user shares their list, the user viewing said list will have to load the entire list of recipes, which can result in a high number of document reads (I suppose realistically ~50, in very rare cases maybe 1000). A sketch of this layout follows the pros and cons below.
Pros:
More natural structure
Easier to maintain (e.g. deleting a recipe, checking if a specific one exists)
Easier to add fields (e.g. timeOfCreation, comment, personalRating, ...)
Cons:
Can result in a high number of reads in the long run
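The structure I have in mind looks roughly like this (field names are just examples):
- users
  - $userId
    - knownRecipes (subcollection)
      - $recipeId
        - timeOfCreation
        - comment
        - personalRating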
Arrays
I could save every known recipe (its id and an imageURL) inside the user's document (or in a single subdocument "KnownRecipes") within an array. This array could take the form:
recipesKnown: [
  {rid: "293ndwa", imageURL: "image1.com", timeAdded: 8371201332},
  {rid: "9012831", imageURL: "image1.com", timeAdded: 8371201871},
  {rid: "jd812da", imageURL: "image1.com", timeAdded: 8371201118},
  ...
]
Pros:
I only need one document read whenever someone wants to see another user's list
Reading a user's list is probably faster
Cons:
It's hard to update a specific recipe (e.g. someone wants to change the imageURL): I need to modify the array locally and send the entire array back to the server in an update, since I cannot change just a single element of the array (see the sketch after this list)
When a user ends up with around 1000 recipes (this may never happen, but it could), Firestore's 1 MiB document size limit could be reached. A possible workaround would be to create a second document and split the array across the two documents.
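To illustrate the update problem, here is a minimal read-modify-write sketch (assuming the google-cloud-firestore Python client; collection and field names are placeholders):
from google.cloud import firestore

db = firestore.Client()

def update_recipe_image(user_id, rid, new_url):
    # Firestore cannot modify a single array element in place:
    # read the document, edit the array locally, write it all back.
    doc_ref = db.collection('users').document(user_id)
    recipes = (doc_ref.get().to_dict() or {}).get('recipesKnown', [])
    for entry in recipes:
        if entry['rid'] == rid:
            entry['imageURL'] = new_url
    # Wrapping this in a transaction would guard against concurrent writes.
    doc_ref.update({'recipesKnown': recipes})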
For me, the idea with Subcollections seems to be the more "clean" solution to this problem, but maybe I am missing some arguments on why one of these solutions would be superior to the other.
My most common queries are as follows (ordered descending by importance):
Which recipes can a user cook
Add a recipe a user can cook to the user's list
Who can cook a specific recipe (there is a Recipe -> Cooks subcollection)
Update an existing recipe a user can cook
The answer to your question depends on the level of scalability you want to achieve.
If by design the amount of sub-data you want to store is limited and very low, you should use arrays, since you reduce the number of document reads, which means lower costs.
If your sub-data is supposed to increase "unlimitedly" over time, you should use sub-collections.
If you're building a database which is not supposed to scale in any direction (Proof of concept, very small business, etc.) just go with what you feel more comfortable with.
I'm researching the same question...
One of the questions is whether the data held in the document will ever go past 1 MB, which is the limit for a document. Researching a bit on how much plain text fits into 1 MB: it's a hell of a lot. Still, if the data grew incredibly far beyond that, it would eventually break. Thus, if you are thinking in a big-big way: sub-collections.
If we had to follow the Firebase element logic, the answer would be sub-collections.
Still, I guess the major point is the data pulled. If you fetch the user document, you will directly be pulling out that whole MB of data. With a sub-collection it won't be loaded automatically, and even when you do load it, you can lazy-load it.
I guess for the kind of setup you are doing: sub-collections.
A key is an additional con/pro of the collection approach:
a key could help to avoid duplicates, but this requires thinking about what the definition of a duplicate is (which might change);
the array's no-key behavior could be emulated via auto-generated IDs.
P.S. @Thomas's list of pros/cons in the question has been quite helpful.
I'm trying to perform a filter by pattern over a Firestore collection. For example, in my Firestore database I have a brand called adidas. The user would have a search input, where typing "adi", "adid", "adida" or "adidas" returns the adidas document. I've considered several solutions:
1. Get all documents and perform a front-end filter
var brands = db.collection("brands");
filteredBrands = brands.filter((br) => br.name.includes("pattern"));
This solution is obviously not an option due to Firestore pricing, since every document in the collection would be read (and billed) on each search. Moreover, the request could take quite a long time if the number of documents is high.
2. Use of Elasticsearch or Algolia
This could be interesting. However, I think it's a bit overkill to add one of these solutions just for a pattern search, and they can also quickly become expensive.
3. Custom searchNames field at object creation
So I had this solution: at document creation, create a field containing an array of the possible search patterns:
{
  ...
  "name": "adidas",
  "searchNames": [
    "adi",
    "adid",
    "adida",
    "adidas"
  ],
  ...
}
so that the document could be accessed with :
filteredBrands = db.collection("brands").where("searchNames", "array-contains", "pattern");
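For reference, the searchNames array could be generated at document creation with something like this (a Python sketch; the helper name and minimum prefix length are assumptions):
def build_search_names(name, min_length=3):
    # All prefixes of the lowercased name, from min_length characters up.
    lower = name.lower()
    return [lower[:i] for i in range(min_length, len(lower) + 1)]

# build_search_names("adidas") -> ["adi", "adid", "adida", "adidas"]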
So I had several questions:
What do you think about the pertinence and the efficiency of this 3rd solution? How much better do you think this could be than using a third-party solution such as Elasticsearch or Algolia?
Do you have any other ideas for performing pattern filters over a Firestore collection?
IMHO, the first solution is definitely not an option. Downloading an entire collection to search for fields client-side isn't practical at all and is also very costly.
The second option is the best one, considering the fact that it will help you enable full-text search across your entire Cloud Firestore database. It's up to you to decide whether it is worth using or not.
What do you think about the pertinence and the efficiency of this 3rd solution?
Regarding the third solution, it might work, but it implies creating an array of possible search patterns even if the brand name is very long. As I see in your schema, you are adding the possible search patterns starting from the 3rd letter, which means that if someone searches for ad, no results will be found. The other downside of this solution is that if you have a brand named Asics Tiger and the user searches for Tig or Tige, you'll again end up with no results.
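If you did want to keep that approach, one way to patch this particular gap is to index prefixes of every word in the name, not just of the whole name. A hedged sketch, extending the helper idea above:
def build_search_names_per_word(name, min_length=2):
    # Prefixes of every word, so "Tig" matches "Asics Tiger".
    names = set()
    for word in name.lower().split():
        for i in range(min_length, len(word) + 1):
            names.add(word[:i])
    return sorted(names)

# build_search_names_per_word("Asics Tiger")
# -> ['as', 'asi', 'asic', 'asics', 'ti', 'tig', 'tige', 'tiger']
That said, the range-query approach below avoids maintaining these arrays entirely.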
Do you have any other ideas for performing pattern filters over a Firestore collection?
If you are only interested in matching a single word, using the starting letters of the brand as the pattern, I recommend a better solution, which is a query that looks like this:
var brands = db.collection("brands");
brands.orderBy("name").startAt(searchName).endAt(searchName + "\uf8ff")
In this case, a search like a or ad will work perfectly fine. Besides that, there is no need to create any extra arrays, so there will be fewer document writes.
I have also written an article called How to filter Firestore data cheaper? that might also help.
I haven't worked with SQL a lot, but apparently enough to be gripped by its claws, because I'm rather at a loss on how to refer to things in NoSQL.
Let's say I have a page displaying books, and then a feature where users can save any number of books to their own personal libraries ("liking" them, essentially).
A book comprises parameters like author, title, amount of pages, synopsis, list of characters, etc.
In SQL I would need to set up a join table between books and users, but in NoSQL (and specifically Firebase, because that's what I'm starting with) there's no such thing, and that leaves me puzzled. I have been warned not to nest data, but here it seems necessary to nest books under users (or vice versa?) and duplicate a lot of data.
This seems icky to me for several reasons. One is that iterating over users would require a lot of extra time, because each user's entire library would need to be iterated over as well. Second, I would need to duplicate a lot of data, because I want to display the books in a library in full, along with the synopsis, character lists, etc.
Do I actually need to make another segment, called library, where I list users and nest their books under them? Is that a valid approach?
Or have I missed something essential here? Can I actually refer to data with something like a foreign key even in NoSQL (including Firebase)?
I ran into the same issues coming from a SQL background. The example structure below is a very high-level view of what your NoSQL structure (in this case Firebase) might best look like:
- Firebase root
- Users
- $userId
- name
- email
- books
- $bookId
- Books
- $bookId
- title
- author
...and a very simple idea of how you'd write and read a book:
var ref = new Firebase('https://<your-app>.firebaseio.com');
ref.child('Books').child($bookId).set({/* book info */});
ref.child('Books').child($bookId).once('value', function (snap) { /* snap.val() is the book */ });
In summary, those $userId and $bookId fields become very important to reference data throughout your app.
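To flesh that out, here is a sketch of the "join" done in application code (assuming the firebase_admin Python SDK; the paths follow the structure above, and the credentials and URL are placeholders):
import firebase_admin
from firebase_admin import credentials, db

# Placeholders: supply your own service account file and database URL.
firebase_admin.initialize_app(
    credentials.Certificate('service-account.json'),
    {'databaseURL': 'https://your-app.firebaseio.com'},
)

def get_user_library(user_id):
    # The user's 'books' node stores bookIds as keys, like foreign keys.
    book_ids = db.reference('Users/{}/books'.format(user_id)).get() or {}
    # The "join": fetch each referenced book record individually.
    return [db.reference('Books/{}'.format(book_id)).get() for book_id in book_ids]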
Consider a set of data called Library, which contains a set of Books and each book contains a set of Pages.
Let's say you are using Riak to store this data, and you need to be able to access the data in two possible ways:
- Query for a particular page (with a unique id)
- Query for all pages in a particular book (with a unique name)
Additionally, you need to be able to easily update and delete pages of a particular Book.
What would be the best way to accomplish this in Riak?
Obviously Riak Search would do the trick, but it may be inefficient for what I am trying to do. I am wondering if it makes sense to set up buckets where each bucket is a Book (which could mean potentially millions of "Book" buckets). Maybe that is a bad idea...
Can this be accomplished with secondary indexes?
I am trying to keep this simple...
I am new to Riak and I am trying to find the best way to accomplish something that is probably relatively simple. I would appreciate any help from the Stack Overflow community. Thanks!
A common way to model master-detail relationships in Riak is to have the master record contain a list of detail record IDs, possibly together with some information about the detail record that may be useful when deciding which detail records to retrieve.
In your example, you could have two buckets called 'books' and 'pages'. The master record in the 'books' bucket will contain metadata and information about the book as a whole, together with a list of the pages that are included in the book. Each entry in that list would contain the ID of the 'pages' record holding the page data, as well as the corresponding page number. If you e.g. wanted to be able to query by chapter, you could also record which chapter each page belongs to.
The 'pages' bucket would contain the text of the page and possibly links to images and other media data that are included on that page. This data could be stored in yet another bucket.
In order to get a specific page or a range of pages, one would first retrieve the master record from the 'books' bucket and then, based on its contents, the appropriate pages. Even though this requires several GET operations, they are all direct lookups based on keys, which is the most efficient and scalable way to retrieve data from Riak, so it will perform and scale well.
This approach also makes it simple to change the order of pages and/or chapters, as only the master record needs to be updated. Adding, deleting or modifying pages would, however, require both the master record and one or more detail records to be updated, added or deleted.
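A minimal sketch of this layout (assuming the official Riak Python client; bucket names, keys and fields are placeholders):
import riak

client = riak.RiakClient()  # assumes a local node with default ports

books = client.bucket('books')
pages = client.bucket('pages')

# Master record: book metadata plus an ordered list of page record IDs.
books.new('moby-dick', data={
    'title': 'Moby-Dick',
    'pages': [
        {'id': 'moby-dick-p1', 'number': 1, 'chapter': 1},
        {'id': 'moby-dick-p2', 'number': 2, 'chapter': 1},
    ],
}).store()

# Detail record: the actual page content.
pages.new('moby-dick-p1', data={'text': 'Call me Ishmael...'}).store()

# Fetching a range of pages: one lookup for the master record,
# then direct key GETs for each wanted page.
book = books.get('moby-dick').data
wanted = [p['id'] for p in book['pages'] if 1 <= p['number'] <= 2]
texts = [pages.get(pid).data['text'] for pid in wanted]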
You can most certainly also solve this problem by adding secondary indexes to the objects and querying based on those. A secondary index query in Riak does, however, have to consult a covering set (generally ring size / n_val) of partitions in order to fulfil the request, and therefore puts a bit more load on the system and generally results in higher latencies than retrieving a single object containing keys through a direct key lookup (which only needs to involve the partitions where the object is actually stored).
Although maintaining a separate object containing the indexes adds a bit of extra work when inserting or deleting pages/entries, this approach will generally result in more efficient reads, as only direct key lookups are required. If your application is read-heavy, it probably makes sense to use this approach, while secondary indexes could be more efficient for a write-heavy application, as inserts and modifications become cheaper at the expense of more expensive reads. You can, however, always add secondary indexes just in case, in order to keep your options open (see the sketch below).
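Continuing the hypothetical client sketch above, attaching and querying a secondary index might look like this (2i requires an index-capable backend such as LevelDB):
# Each page carries a secondary index pointing back to its book,
# so all pages of a book can be found without the master record.
page = pages.new('moby-dick-p2', data={'text': '...'})
page.add_index('book_bin', 'moby-dick')
page.store()

# 2i query: returns the keys of every page indexed under this book.
page_keys = pages.get_index('book_bin', 'moby-dick')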
In cases like this I would usually recommend performing some benchmarks to test the solutions and check which one best matches your particular performance and scaling requirements.
The most efficient way will be to store the whole book as one object, and duplicate its pages as separate objects.
Pros:
you will be able to select any object by its key (the cheapest op in Riak is a KV get)
any query will have predictable latency
this is the natural way of storing data for Riak
Cons:
If you need to update any page, you must update the whole book and then the page. As Riak doesn't have atomic ops, you must think about how to recover from any failure situation (like this: the book was updated, but the page was not).
Riak is about availability and predictable latency, so if you use something like 2i to collect results, the query time becomes unpredictable and will grow with the number of pages.