I have streams with relations to categories and users, but those relations are optional. A stream is not always related to a category or user; it may be related to just one category, to multiple categories, or to users. My question is that I want to join two optional queries in a way that handles all those cases. The example below works if both the category and user relations exist for a node, but it does not work if the node has only a user relation. Any ideas?
MATCH (stream:Stream {id: "xyz123"})
MATCH (stream)-[:CONTAINS]->(categories)-[:CHILD_OF*0..50]->(subcats)<-[:PHOTO_OF]-(photo), (stream)<-[:PARTICIPANT_OF]-(users)<-[:OWNER]-(photo)
WHERE photo.is_private=false
RETURN collect(photo.id) AS photo_ids
What I need is the intersection of the two matches if both the user and category relations exist; if only the category or only the user relation exists, return only the result of that relation.
Does this work for you?
MATCH (stream:Stream { id: "xyz123" })
// Photos reachable through the (optional) category relationship;
// COLLECT skips nulls, so a missing relation just yields an empty list
OPTIONAL MATCH (stream)-[:CONTAINS]->(categories)-[:CHILD_OF*0..50]->(subcats)<-[:PHOTO_OF]-(photo)
WHERE photo.is_private=false
WITH stream, COLLECT(photo) AS p1
// Photos reachable through the (optional) user relationship
OPTIONAL MATCH (stream)<-[:PARTICIPANT_OF]-(users)<-[:OWNER]-(photo)
WHERE photo.is_private=false
WITH stream, p1 + COLLECT(photo) AS pCombined
UNWIND pCombined AS photo
RETURN COLLECT(DISTINCT photo.id) AS photo_ids
Here is a console that shows this query working.
We are implementing a custom REST endpoint in Magnolia that queries for pages, and it should be able to filter by categories. I found the JCR Query Cheat Sheet, but I could not see an example of filtering a multivalued field. In our case, we have a categories field with an array of category UUIDs, and I want all pages having a certain category UUID. It works using a LIKE query (let's say 123-456 is a UUID) such as:
"SELECT * FROM [mgnl:page] p where p.[categories] like '%123-456%'
Is there a better approach, without using (possibly slow) LIKE queries, to explicitly check for an intersection with the categories array? Are there any SET/ARRAY functions to use in WHERE conditions for such filtering?
In the case of multivalued fields, you can combine multiple conditions in the WHERE clause like so:
SELECT * FROM [mgnl:content] AS p
WHERE p.[categories] = '415025c6-e4b5-4506-9384-34f428a52104'
AND p.[categories] = 'e007e401-1bf8-4658-8293-b9c743784264'
This will return nodes whose categories (multivalued) property contains both IDs.
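For the single-category case from the question, the same idea should make the LIKE unnecessary: in JCR-SQL2 a comparison against a multivalued property is true if any of its values matches, so a single equality condition is enough (using the question's placeholder UUID):
SELECT * FROM [mgnl:page] AS p
WHERE p.[categories] = '123-456'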
I have a collection which represents a list of available sport matches (see image below, sorry for the Italian text).
Each document is a match and has a list of players who are subscribed to that match (id_player1, id_player2, etc.).
When someone wants to subscribe to a match, I have to cycle through the player id fields, and when I find a null one, I set it to the user's id.
So my questions are:
how can I cycle through the fields of the document and check if they are null or not?
how can I count how many fields are not null, so when this count is equal to X, I do something?
You decided to define 6 different fields to store player IDs, so you cannot cycle through those fields. What you can do is fetch all six fields and check one by one whether they are null.
What you should do instead is refactor that logic and store the player IDs in an array, and update it only while its length is under 6, so you don't have to check whether there is any space left to add a player ID.
Bye :D
If there is no specific meaning to each individual id_player* field, consider storing all player IDs in a single player_ids array field.
That way you can use arrayUnion to add values to the field (preventing duplicates) and query with array_contains to find documents with a specific player ID.
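For illustration, here is a minimal Dart sketch of that approach; the 'matches' collection name is an assumption, and the player_ids field follows the suggestion above:
import 'package:cloud_firestore/cloud_firestore.dart';

// Add a player to a match; arrayUnion only appends the id if it is not already present.
Future<void> joinMatch(String matchId, String playerId) {
  return FirebaseFirestore.instance
      .collection('matches') // assumed collection name
      .doc(matchId)
      .update({'player_ids': FieldValue.arrayUnion([playerId])});
}

// Find every match a given player is subscribed to.
Query<Map<String, dynamic>> matchesForPlayer(String playerId) {
  return FirebaseFirestore.instance
      .collection('matches')
      .where('player_ids', arrayContains: playerId);
}
Checking whether a match is full then becomes a simple length check on the array instead of counting non-null fields.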
I've got a Collection message which contains both a document (representing the chat room) and a nested collection which contains all the messages of that conversation.
Now I'd like to request all the documents (chat rooms) in which my user is involved. So if one of the id1 or id2 fields in the users map is equal to my user id, I collect that document.
I've noticed that I can't use array queries as I'm using maps and not arrays.
So I don't know what would be the best approach to proceed to that query.
Stream<List<ChatRoom>> getChatRooms(String userId) {
  final messagesCollection = FirebaseFirestore.instance.collection('messages');
  // var query = messagesCollection.where('users' in [userId])
See Frank's comment below: The best solution is to "use an array of user IDs (userIDs: ["ABCDEF", "GHIJKL"]) and an array-contains condition. In this use-case it seems that would save on the number of needed indexes (and thus on the cost of storage)".
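A minimal sketch of that array-based variant, assuming each chat room document carries a userIDs array as in the quoted comment:
import 'package:cloud_firestore/cloud_firestore.dart';

// Stream of chat rooms in which the given user appears in the userIDs array.
Stream<QuerySnapshot<Map<String, dynamic>>> getChatRooms(String userId) {
  return FirebaseFirestore.instance
      .collection('messages')
      .where('userIDs', arrayContains: userId)
      .snapshots();
}
Mapping each document to a ChatRoom object is left out here.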
If you really need to keep the map for other reasons, you can very well have the two fields in the doc. It's not a problem to duplicate the data.
If the value you assign to each userId in the map does not have to be meaningful, you can assign a Boolean value of true and then query as follows:
final messageCollection = FirebaseFirestore.instance.collection('messages');
messageCollection.where('users.id1', isEqualTo: true)
    .get()
    .then(...);
So, your map will look like:
users
  id1: true
  id2: true
Clarification:
The idea is to use the user ids as keys and have a value of true. Let's imagine two users with the following ids: "ABCDEF" and "GHIJKL"
Instead of having a map like
users
  id1: "ABCDEF"
  id2: "GHIJKL"
you could have it as follows:
users
  ABCDEF: true
  GHIJKL: true
Note that you could very well have the two maps in the doc, if for some reason you really need to keep the first map.
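With the user ids as keys, the earlier query just interpolates the id into the field path. A small sketch, reusing the question's 'messages' collection:
import 'package:cloud_firestore/cloud_firestore.dart';

// Find every chat room in which the given user id is a key of the users map.
Future<QuerySnapshot<Map<String, dynamic>>> chatRoomsFor(String userId) {
  return FirebaseFirestore.instance
      .collection('messages')
      .where('users.$userId', isEqualTo: true)
      .get();
}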
I have the following document structure in firebase:
{
  typeId: number,
  tripId: number,
  locationId: string,
  expenseId: number,
  createdAt: timestamp
}
I want to query this collection using a different 'where' clause every time. Sometimes the user wants to filter by typeId and sometimes by locationId, or maybe include all of the filters.
But it seems like I would need to create a compound index for each possible permutation, for example typeId + expenseId, typeId + locationId, locationId + expenseId, etc.; otherwise it doesn't work.
What if I have 20 fields and I want to make it possible to search across all of these?
Could you please help me construct a query and indexes for the following requirement: the ability to query across all fields, where the query can contain one, two, three, all, or none of the fields in the where clause, and the results always have to be ordered in descending order by createdAt.
Cloud Firestore automatically creates indexes for the individual fields of your documents, so it can already filter on each field without you having to manually add these indexes.
In many cases it is able to combine these indexes to allow queries on field combinations, by performing a so-called zig-zag-merge-join.
Custom additional indexes are typically only needed once you add an ordering clause to your query in addition to filter clauses. If you have such a case, the Firestore client will log an error telling you exactly what index to create (with a link to the Firestore console that is prepopulated to create the index for you).
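For illustration, a sketch of building such a query conditionally in Dart; the 'expenses' collection name and the nullable parameters are assumptions for this example, while the field names follow the question:
import 'package:cloud_firestore/cloud_firestore.dart';

// Apply only the filters the user actually selected, then order by createdAt.
Query<Map<String, dynamic>> buildQuery({int? typeId, String? locationId, int? expenseId}) {
  Query<Map<String, dynamic>> query =
      FirebaseFirestore.instance.collection('expenses'); // assumed collection name
  if (typeId != null) query = query.where('typeId', isEqualTo: typeId);
  if (locationId != null) query = query.where('locationId', isEqualTo: locationId);
  if (expenseId != null) query = query.where('expenseId', isEqualTo: expenseId);
  // Each combination of equality filters plus this orderBy may need its own
  // composite index; Firestore's error message links to the exact index to create.
  return query.orderBy('createdAt', descending: true);
}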
I have a data model like this:
Person node
Email node
OWNS relationship
LISTS relationship
KNOWS relationship
Each Person can OWN one Email and LIST multiple Emails (like a contact list; 200 contacts per Person is assumed).
The query I am trying to perform finds all the Persons that OWN an Email that a contact LISTS, and creates a KNOWS relationship between them.
MATCH (n:Person {uid:'123'}) -[r1:LISTS]-> (m:Email) <-[r2:OWNS]- (l:Person)
CREATE UNIQUE (n)-[:KNOWS]->(l)
The counts of my current database is as follows:
Number of Person nodes: 10948
Number of Email nodes: 1951481
Number of OWNS rels: 21882
Number of LISTS rels: 4376340 (Each Person has 200 unique LISTS rels)
Now my problem is that running this query on the current database takes somewhere between 4.3 and 4.8 seconds, which is unacceptable for my needs. I wanted to know if this is normal timing considering my data model, or if I am doing something wrong with the query (or even the model).
Any help would be much appreciated. Also if this is normal for Neo4j please feel free to suggest other graph databases that can handle this kind of model better.
Thank you very much in advance
UPDATE:
My query is: profile match (n: {uid: '4692'}) -[:LISTS]-> (:Email) <-[:OWNS]- (l) create unique (n)-[r:KNOWS]->(l)
The PROFILE command on my query returns this:
Cypher version: CYPHER 2.2, planner: RULE. 3919222 total db hits in 2713 ms.
Yes, 4.5 seconds to match one person from index along with its <=100 listed email addresses and merging a relationship from user to the single owner of each email, is slow.
The first thing is to make sure you have an index on the uid property for nodes with the :Person label. Check your indexes with the SCHEMA command and, if it is missing, create one with CREATE INDEX ON :Person(uid).
Secondly, CREATE UNIQUE may or may not do the work fine, but you will want to use MERGE instead. CREATE UNIQUE is deprecated, and though the two are sometimes equivalent, the operation you want performed should be expressed with MERGE.
Thirdly, to find out why the query is slow you can profile it:
PROFILE
MATCH (n:Person {uid:'123'})-[:LISTS]->(m:Email)<-[:OWNS]-(l:Person)
MERGE (n)-[:KNOWS]->(l)
See 1, 2 for details. You may also want to profile your query while forcing the use of one or the other of the cost- and rule-based query planners, to compare their plans.
CYPHER planner=cost
PROFILE
MATCH (n:Person {uid:'123'})-[:LISTS]->(m:Email)<-[:OWNS]-(l:Person)
MERGE (n)-[:KNOWS]->(l)
With these you can hopefully find and correct the problem, or update your question with the information to help others help you find it.