Storing timestamp in joining node value instead of Boolean in Firebase database - firebase

Say that I have nodes user and item, and a user_items node used to join them.
Typically one would (as advised in the official documents and videos) use a structure like this:
"user_items": {
"$userKey": {
"$itemKey1": true,
"$itemKey2": true,
"$itemKey3": true
}
}
I would like to use the following structure instead:
"user_items": {
"$userKey": {
"$itemKey1": 1494912826601,
"$itemKey2": 1494912826602,
"$itemKey3": 1494912826603
}
}
with the values being timestamps, so that I can order the items by creation date while also being able to tell when each association was created. It seems like a kill-two-birds-with-one-stone situation. Or is it?
Are there any downsides to this approach?
EDIT: I'm also using this approach for boolean-style fields such as approved_at, seen_at, etc., instead of using two fields like:
"some_message": {
"is_seen": true,
"seen_timestamp": 1494912826602,
}

You can model your database in any way you want, as long as you follow the Firebase rules. The most important rule is to keep the data as flat as possible, and according to that rule your database is structured correctly. There is no single solution that gives you a perfect database, but depending on your needs, any of the following options can be considered good practice:
1. "$itemKey1": true,
2. "$itemName1": true,
3. "$itemKey1": 1494912826601,
4. "$itemName1": 1494912826601,
What is the meaning of "$itemKey1": 1494912826601? Because a timestamp has been set, the item has already been written to the database and linked to that specific user, which in other words also means true. So it is not a bad approach to do something like this.
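For example, here is a minimal sketch (assuming the Firebase web SDK with the namespaced API, and hypothetical userKey / itemKey variables) of writing the join entry with a server-side timestamp instead of true:
// Store the association with its creation time as the value
firebase.database()
  .ref(`user_items/${userKey}/${itemKey}`)
  .set(firebase.database.ServerValue.TIMESTAMP);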
Hope it helps.

Great minds must think alike, because I do the exact same thing :) In my case, the "items" are posts that the user has upvoted. I use the timestamps with orderBy(), along with limitToLast(50) to get the "last 50 posts that the user has upvoted". And from there they can load more. I see no downsides to doing this.
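For reference, a rough sketch of that kind of query (assuming the Realtime Database namespaced web SDK; since the timestamps are stored as values, the ordering call here is orderByValue()):
// Read the 50 most recently added items for a user
firebase.database()
  .ref(`user_items/${userKey}`)
  .orderByValue()
  .limitToLast(50)
  .once('value', snapshot => {
    snapshot.forEach(child => {
      console.log(child.key, child.val()); // itemKey, timestamp
    });
  });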

Related

How to store keywords in firebase firestore

My application uses keywords extensively; everything is tagged with keywords, so whenever the user wants to search or add data I have to show keywords in an autocomplete box.
As of now I am storing keywords in a separate collection, as below:
export interface IKeyword {
  Id: string;
  Name: string;
  CreatedBy: IUserMin;
  CreatedOn: firestore.Timestamp;
}
export interface IUserMin {
  UserId: string;
  DisplayName: string;
}
export interface IKeywordMin {
  Id: string;
  Name: string;
}
My main document holds an array of keywords:
export interface MainDocument {
  Field1: string;
  Field2: string;
  // ... other fields ...
  Keywords: IKeywordMin[];
}
But the problem is that autocomplete reads data frequently, and my document-read quota increases very fast.
Is there a way to implement this without increasing reads for keywords? The keywords are not the real data we need to get.
Below is my query to get the main documents:
query = query.where("Keywords", "array-contains-any", keywords)
I use the query below to get keywords in the autocomplete text box:
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times as the user types in the autocomplete box, which causes more document reads.
Does this answer your question?
https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
Though the recommended solution is to use a third-party tool:
https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind, although I'm not sure whether it suits your use case, is the Firestore caching feature. By default, the Firestore client will always try to reach the server to get the latest changes to your documents, and if it cannot reach the server it will fall back to the cached data on the client device. You can take advantage of this feature by using the cache first and reaching the server only when you want to. For web applications this feature is disabled by default, and you can enable it as described here:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
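A rough sketch of that idea (assuming the namespaced web SDK and a keyword query built like the one above):
// Enable offline persistence once at startup (disabled by default on web)
firebase.firestore().enablePersistence()
  .catch(err => console.warn('Persistence unavailable:', err.code));

// Try the local cache first; only hit the server when the cache has nothing
async function getCacheFirst(query: firebase.firestore.Query) {
  try {
    const cached = await query.get({ source: 'cache' });
    if (!cached.empty) return cached;
  } catch (e) {
    // nothing usable in the cache, fall through to the server
  }
  return query.get({ source: 'server' });
}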
I found a solution and thought I would share it here.
Create a new collection named typeaheads in the format below:
export interface ITypeAHead {
  Prefix: string;
  CollectionName: string;
  FieldName: string;
  MatchingValues: ILookupItem[];
}
export interface ILookupItem {
  Key: string;
  Value: string;
}
Depending on the minimum number of letters, store either the first 2 or 3 letters of the value in Prefix, and search based on the prefix, collection and field. That way you will most probably end up with 2 or 3 document reads per search.
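A hedged sketch of the lookup side (assuming the interfaces above, a typeaheads collection, and illustrative CollectionName / FieldName values):
// At most one read per keystroke: fetch the precomputed prefix document
// and filter the remaining characters in memory.
async function suggestKeywords(searchTerm: string) {
  const prefix = searchTerm.slice(0, 3).toLowerCase();
  const snap = await firebase.firestore().collection('typeaheads')
    .where('Prefix', '==', prefix)
    .where('CollectionName', '==', 'MainDocument')
    .where('FieldName', '==', 'Keywords')
    .limit(1)
    .get();
  if (snap.empty) return [];
  const item = snap.docs[0].data() as ITypeAHead;
  return item.MatchingValues.filter(v =>
    v.Value.toLowerCase().startsWith(searchTerm.toLowerCase()));
}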
Hope this helps someone else.

CosmosDB Ladder pattern

I'm reading this documentation: https://blogs.msdn.microsoft.com/mvpawardprogram/2016/03/15/going-social-with-documentdb/. It talks about a "Ladder pattern", but there are no examples and I can't seem to find this anywhere else. Could I get a little more direction on this concept?
I want to handle data duplication with a pattern, so that I can update the main records and not have to worry about updating them everywhere they are duplicated or referenced.
You are looking at old documentation; check the updated one under "The 'Ladder' pattern and data duplication".
Let’s take user information as an example:
{
  "id": "dse4-qwe2-ert4-aad2",
  "name": "John",
  "surname": "Doe",
  "address": "742 Evergreen Terrace",
  "birthday": "1983-05-07",
  "email": "john@doe.com",
  "twitterHandle": "@john",
  "username": "johndoe",
  "password": "some_encrypted_phrase",
  "totalPoints": 100,
  "totalPosts": 24
}
By looking at this information, we can quickly detect which information is critical and which isn't, thus creating a "Ladder":
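Roughly speaking (paraphrasing the linked article), the ladder orders these fields from least to most likely to change, and only the stable bottom rungs get duplicated into other documents, for example a minimal user reference like:
{
  "id": "dse4-qwe2-ert4-aad2",
  "username": "johndoe"
}
Fast-changing fields such as totalPoints and totalPosts stay only in the main user document, so updating them does not fan out to every place the user is referenced.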

Structure Firebase data

How would I structure my data in Firebase to retrieve all posts that the current user has not commented on? I am very new to NoSQL and I can't seem to get my head out of the SQL way of structuring things.
This is my attempt at it:
Posts: {
  someUniqueId: {
    user: userid,
    content: "blah"
  }
}
Comments: {
  someCommentUniqueId: {
    comment: "ola",
    post: someUniqueId,
    user: userid
  }
}
Now, if the above is correct, I have absolutely no idea how I would query this. Is it even possible in NoSQL?
Firebase does not have a mechanism to query for the absence of a value. See "Is it possible to query data that is not equal to the specified condition?"
In NoSQL you often end up modeling the data for the queries that you need. So if you want to know which posts each user can still comment on, model that information in your JSON tree:
CommentablePosts_per_User
  $uid
    $postid: true
This type of structure is often called an index, since it allows you to efficiently look up the relevant $postid values for a given user. The process of extracting such indexes from the data is often called denormalization. For a (somewhat older) overview of this technique, see this Firebase blog post on denormalization.
I recommend this article as a good introduction to NoSQL data modeling.
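A rough sketch (namespaced web SDK, hypothetical function and key names) of keeping such an index up to date with fan-out writes:
const db = firebase.database();

// When a post is created, mark it as commentable for every relevant user
function onPostCreated(postId: string, userIds: string[]) {
  const updates: { [path: string]: boolean } = {};
  userIds.forEach(uid => {
    updates[`CommentablePosts_per_User/${uid}/${postId}`] = true;
  });
  return db.ref().update(updates);
}

// When a user comments, drop the post from their index
function onUserCommented(uid: string, postId: string) {
  return db.ref(`CommentablePosts_per_User/${uid}/${postId}`).remove();
}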
If I may suggest a couple of options:
Posts:
  someUniquePostId:
    user_id_0: false
    user_id_1: true
      comment: "dude, awesome post"
    user_id_2: false
    user_id_3: true
      comment: "wicked!"
Drive space is cheap, so storing all the user ids within the post would allow you to easily select which posts user_id_0 has not commented on by querying for user_id_0: false.
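A hedged sketch (namespaced web SDK) of that query against the structure above:
// Posts that user_id_0 has not commented on
firebase.database()
  .ref('Posts')
  .orderByChild('user_id_0')
  .equalTo(false)
  .once('value', snapshot => {
    snapshot.forEach(post => {
      console.log(post.key); // e.g. someUniquePostId
    });
  });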
Alternatively, you could flip the logic:
Posts:
  post_id_0:
    user_id_1: "dude, awesome post"
    user_id_3: "wicked"
  post_id_1:
    user_id_0: "meh"
    user_id_2: "sup?"
Users:
  user_id_0:
    no_posts:
      post_id_0: true
  user_id_1:
    no_posts:
      post_id_1: true
This would enable you to query which posts each user has not commented on: in this case, user_id_0 has not commented on post_id_0 and user_id_1 has not commented on post_id_1.
Of course, depending on the situation, you can also lean on client logic to get the data you need. For example, if you only care about which posts a user didn't comment on yesterday, you could read yesterday's posts with a single query and compare in code to see whether their user_id is a child of each post, obviously avoiding this if the dataset is large.

APIGEE querying data that DOESN'T match condition

I need to fetch from the BaaS data store all records that don't match a condition.
I use a query string like:
https://api.usergrid.com/<org>/<app>/<collection>?ql=location within 10 of 30.494697,50.463509 and Partnership eq 'Reject'
That works correctly (I don't URL-encode the string after ql).
But any attempt to put "not" in this query causes "The query cannot be parsed".
I have also tried <>, !=, NE, and some variations of "not".
How do I write the query to fetch all records in the range where Partnership is NOT equal to 'Reject'?
Not operations are supported, but they are not performant because they require a full scan. When coupled with a geolocation call, this could be quite slow. We are working on improving this in the Usergrid core.
Having said that, in general it is much better to invert the call if possible. For example, instead of adding the property only when the case is true, always write the property to every new entity (even when false), then edit the property when the case becomes true.
Instead of doing this:
POST
{
  'name':'fred'
}
PUT
{
  'name':'fred',
  'had_cactus_cooler':true
}
Do this:
POST
{
  'name':'fred',
  'had_cactus_cooler':'no'
}
PUT
{
  'name':'fred',
  'had_cactus_cooler':'yes'
}
In general, try to store your data in the way you want to get it out. Since you know up front that you want to query on whether this property exists, simply add it, but with a negative value, then update it when the condition becomes true.
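A sketch of what the follow-up lookup could then look like (using the illustrative had_cactus_cooler property from above):
https://api.usergrid.com/<org>/<app>/<collection>?ql=select * where had_cactus_cooler = 'no'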
You should be able to use this syntax:
https://api.usergrid.com/<org>/<app>/<collection>?ql=location within 10 of 30.494697,50.463509 and not Partnership eq 'Reject'
Notice that the not operator comes before the expression (as indicated in the docs).

Dynamics GP Web Service -- Returning list of sales order based on specific criteria

For a web application, I need to get a list or collection of all SalesOrders that meet the following criteria:
Have a WarehouseKey.ID equal to "test", "lucmo" or "Inno"
Have Lines that have a QuantityToBackorder greater than 0
Have Lines that have a RequestedShipDate greater than current day.
I've successfully used these two methods to retrieve documents, but I can't figure out how to return only the ones that meet the above criteria:
http://msdn.microsoft.com/en-us/library/cc508527.aspx
http://msdn.microsoft.com/en-us/library/cc508537.aspx
Please help!
Short answer: your query isn't possible through the GP Web Services. Even your warehouse key isn't an accepted criteria for GetSalesOrderList. To do what you want, you'll need to drop to eConnect or direct table access. eConnect has come a long way in .Net if you use the Microsoft.Dynamics.GP.eConnect and Microsoft.Dynamics.GP.eConnect.Serialization libraries (which I highly recommend). Even in eConnect, you're stuck with querying based on the document header rather than line item values, though, so direct table access may be the only way you're going to make it work.
In eConnect, the key piece you'll need is generating a valid RQeConnectOutType. Note the "ForList = 1" part. That's important. Since I've done something similar, here's what it might start out as (you'd need to experiment with the capabilities of the WhereClause, I've never done more than a straightforward equal):
private RQeConnectOutType getRequest(string warehouseId)
{
eConnectOut outDoc = new eConnectOut()
{
DOCTYPE = "Sales_Transaction",
OUTPUTTYPE = 1,
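// FORLIST = 1 requests a list of matching documents rather than a single document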
FORLIST = 1,
INDEX1FROM = "A001",
INDEX1TO = "Z001",
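// Illustrative filter only; WhereClause support is limited, so test what it accepts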
WhereClause = string.Format("WarehouseId = '{0}'", warehouseId)
};
RQeConnectOutType outType = new RQeConnectOutType()
{
eConnectOut = outDoc
};
return outType;
}
If you have to drop to direct table access, I recommend going through one of the built-in views. In this case, it looks like ReqSOLineView has the fields you need (LOCNCODE for the warehouseIds, QTYBAOR for backordered quantity, and ReqShipDate for requested ship date). Pull the SOPNUMBE and use them in a call to GetSalesOrderByKey.
And yes, hybrid solutions kinda suck rocks, but I've found you really have to adapt if you're going to use GP Web Services for anything with any complexity to it. Personally, I isolate my libraries by access type and then use libraries specific to whatever process I'm using to coordinate them. So I have Integration.GPWebServices, Integration.eConnect, and Integration.Data libraries that I use practically everywhere and then my individual process libraries coordinate on top of those.
