Queries in Realtime Database (using LimitToLast) are very slow - firebase

I'm using the Realtime Database (Firebase 7.3.2) with Unity.
When I use the LimitToLast() method, the query takes a long time (1.5 to 2 minutes) to return a response.
But when I load the whole dataset, or execute the same query without LimitToLast, it returns quickly.
I want to ask whether anyone else has run into this problem while developing with the Firebase Realtime Database.
My database contains 1700 rooms.
This is the query:
var result = await FirebaseDatabase.DefaultInstance.GetReference("Rooms")
.OrderByChild("CreationDate").LimitToLast(10).GetValueAsync();
And this is the structure of the Rooms collection in the database:
{
  "Rooms" : {
    "-Lp860kFH8TjdAsPpar1" : {
      "CreationDate" : -14400,
      "Title" : "Room 1",
      ...
    },
    "-Lp860kFH8TjdAsPpbr2" : {
      "CreationDate" : -14402,
      "Title" : "Room 2",
      ...
    },
    ...
    "-Lp860kFH8TjdAsPpar3" : {
      "CreationDate" : -14404,
      "Title" : "Room 1700",
      ...
    }
  }
}

Are you sure you have indexing set up in your Firebase Realtime Database security rules? If it's not, then the query is executed as follows:
1. Download all the data from the "Rooms" branch to the Unity client.
2. Sort the data according to your ordering criteria on the Unity client.
3. Discard all except the last 10 children of this sorted data.
Nobody wants that when all they need is the last 10 children. The ordering and limiting to the last 10 children should happen on the database server itself,
which ensures the query is fast enough to give you the result in milliseconds. For that, you'll have to index your data and then run your queries.
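For reference, a minimal sketch of such an index, assuming the Rooms structure shown in the question, goes in the Realtime Database security rules:
{
  "rules": {
    "Rooms": {
      ".indexOn": ["CreationDate"]
    }
  }
}
With this index in place, OrderByChild("CreationDate").LimitToLast(10) is evaluated on the server and only the ten matching rooms are sent to the Unity client.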

Related

DynamoBD/Amplify non-negative field and field validation on mutations

I am new to AWS in general. I am building a relatively simple application with Amplify, but I've used Google Firebase before. My question is: is there a way to set a constraint for a field to be non-negative? I have an application that does transactions and I don't want my balance to go negative; I just need a simple error/exception. Is it possible to set a field constraint in DynamoDB that says "this field should be >= 0"?
I also checked whether it was possible to do it in the Amplify-generated VTL resolver of my GraphQL mutation, and it is indeed possible to set some constraints. But somehow it allows the operation and fails on the next one (when the balance in the DB is already < 0, as if it checks the value before the update). I tried something like "current_balance - transaction >= 0" but I couldn't get it to work.
So it seems that the only way is to create a custom Lambda resolver that does the various checks before submitting the mutation to DynamoDB. I haven't tried it yet, but I don't understand how I can check the current balance (stored in the DB) without doing a query.
More generally, is it even possible to validate fields (even with simple assertions like non-negative) on Amplify/DynamoDB? Would moving to another DB like Aurora help?
Thanks for your help.
DynamoDB supports conditional updates, which allow an update to be applied only when a given condition is met. You can set the condition current_balance >= cost for your update.
However, the negative balance is not the main problem. What you should address is how to prevent other requests from updating the same current_balance at the same time, or in short, race conditions on current_balance. To deal with that, you also need a conditional update whose condition is "current_balance = initial_balance". The initial_balance is, I guess, what you read from DynamoDB at the very beginning of the purchase process.
Sample VTL code
#set( $remaining_balance = $initial_balance - $transaction_cost )
#if( $remaining_balance < 0 )
  $util.error("Insufficient balance")
#end
{
  "version" : "2018-05-29",
  "operation" : "UpdateItem",
  "key" : { <your-dynamodb-key> },
  "update" : {
    "expression" : "SET current_balance = :remaining_balance",
    "expressionValues" : {
      ":remaining_balance" : $util.dynamodb.toNumberJson($remaining_balance)
    }
  },
  "condition" : {
    "expression" : "current_balance = :initial_balance",
    "expressionValues" : {
      ":initial_balance" : $util.dynamodb.toNumberJson($initial_balance)
    }
  }
}
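If another request changes current_balance between your initial read and this update, the condition fails and DynamoDB rejects the write (in an AppSync resolver this typically surfaces as a ConditionalCheckFailedException), so the client can re-read the balance and retry the mutation.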

Firebase simple many to many relationship

I am a bit familiar with NoSQL and the Firebase Realtime Database, and I also know that it is not the best solution for tasks where a relational database would be more appropriate. I want to verify the structure of a simple many-to-many relationship that I have.
I have events and users. I want to use Firebase to store information about users participating in events. Later I will need to:
Get the list of users for an event, knowing its id and city
Get the list of events for a user, knowing its id and city
Add or delete information about a user attending an event
I would like to have a first tree of event ids divided by cities.
events {
  'city1' : {
    event_id_1 : {'user_1', 'user_2'},
    event_id_2 : {'user_3', 'user_4'}
  },
  'city2' : {
    event_id_3 : {'user_5', 'user_6'},
    event_id_4 : {'user_7', 'user_7'}
  }
}
And a second tree for users:
users {
  'user1' : {
    'city1' : {event_id_1, event_id_2},
    'city2' : {event_id_3, event_id_4},
    'city3' : {event_id_3, event_id_4}
  },
  'user2' : {
    'city1' : {event_id_1, event_id_2},
    'city2' : {event_id_3, event_id_4},
    'city3' : {event_id_3, event_id_4}
  },
  'user3' : {
    'city1' : {event_id_1, event_id_2},
    'city2' : {event_id_3, event_id_4},
    'city3' : {event_id_3, event_id_4}
  }
}
Would it be easy and fast to use and maintain?
Your structure looks pretty OK to me given the requirements listed. Most importantly: you store the data in both directions already, which is the biggest hurdle for many developers new to NoSQL data modeling.
A few notes about your data model, though most are on the level of typos:
Be sure to store the data as maps, not arrays. So event_id_1 : {'user_1': true, 'user_2': true }
If there is a many-to-many relationship between users and events, I'd usually have four top-level lists: users and events (for the primary information about each), and then userEvents and eventUsers (for connections between the two).
Adding a user to an event can be done with a single multi-location update, e.g.:
ref.update({
  '/userEvents/userId1/eventId1': true,
  '/eventUsers/eventId1/userId1': true
});
Unregistering them is a matter of doing the same with null as the value (which deletes the existing key):
ref.update({
  '/userEvents/userId1/eventId1': null,
  '/eventUsers/eventId1/userId1': null
});
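Reading the relationship back is then a single read per direction; a minimal sketch with the same ref, using the userEvents list from the updates above:
// List all events userId1 is attending (a map of event ids to true).
ref.child('userEvents/userId1').once('value', function(snapshot) {
  console.log(Object.keys(snapshot.val() || {}));
});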
Also see my answer here: Many to Many relationship in Firebase
You can do this:
List of users:
Users
  useruid
    name: userx
    email: userx@gmail.com
  useruid1
    name: usery
    email: usery@gmail.com
Events
  eventid
    useruid
    name: userx
    location: city1
    eventname: party
  eventid2
    useruid1
    name: usery
    location: city2
    eventname: Boring Party
Get the list of users for an event, knowing its id and city:
DatabaseReference ref = FirebaseDatabase.getInstance().getReference().child("Events").child(eventid);
Query q = ref.orderByChild("location").equalTo(city1);
// retrieve the users using a listener
Get the list of events for a user, knowing its id and city:
DatabaseReference ref = FirebaseDatabase.getInstance().getReference().child("Events");
Query q = ref.orderByChild("location").equalTo(city1);
Using a listener, this gives you the events that are in location: city1.

Firebase one to one chat with Angular

I would like to build a one-to-one chat, where each user can contact any other user.
The JSON structure would be:
{
  "messages" : {
    "user1UID_user2UID" : {
      auto generated ID : {
        "text" : "hello",
        "timestamp" : 192564646546,
        "name" : "user1"
      },
      auto generated ID : {
        "text" : "hi",
        "timestamp" : 192564646554,
        "name" : "user2"
      }
    }
  }
}
When user1 connects to the app, he can see the list of every conversation he is a part of.
Let's say he has initiated a conversation with user2, and user3 has started a conversation with him too.
So we would have the following children:
user1UID_user2UID
user3UID_user1UID
How can I retrieve all the conversations user1 is involved in?
constructor(db: AngularFireDatabase) {
  this.messages = db.list('/messages/' + user1UID + "_" + user2UID); // but I don't know user2UID at this moment
}
Can I use a regex, or do I have to store the conversation key (somewhere) every time it concerns him?
Or am I completely wrong and not looking at the problem the right way?
The key naming schema you use for the chat rooms is a variant of my answer here: http://stackoverflow.com/questions/33540479/best-way-to-manage-chat-channels-in-firebase. It's a variant, since you don't seem to order the UIDs lexicographically, which I recommend.
All my proposed algorithm does is generate a reproducible, unique, idempotent key for a chat room between specific users. And while those are very important properties for a data model, they don't magically solve other use-cases.
As is often the case in NoSQL data modeling, you'll have to model the data to fit the use-cases you want to support. So if your app requires showing a list of chat rooms for each user, then you should include a list of chat rooms for each user in your data model:
userChatRooms
  user1UID
    user1UID_user2UID
    user1UID_user3UID
  user2UID
    user1UID_user2UID
  user3UID
    user1UID_user3UID
Now getting a list of the chat rooms for a user is as easy as reading /userChatRooms/$uid.
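In the Angular constructor from the question, that read is just another db.list() call; a minimal sketch, assuming the userChatRooms structure above and the same injected AngularFireDatabase:
constructor(db: AngularFireDatabase) {
  // All chat-room keys the signed-in user participates in.
  this.chatRooms = db.list('/userChatRooms/' + user1UID);
}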

Structure a NoSQL database for a chat application (using FireBase)

Coming from years of using relational databases, I am trying to develop a pretty basic chat/messaging app using Firebase.
Firebase uses a NoSQL data structure based on JSON.
I did a lot of research to understand how to structure the database with performance in mind. I tried to "denormalize" the structure and ended up with the following:
{
  "chats" : {
    "1" : {
      "10" : {
        "conversationId" : "x123332"
      },
      "17" : {
        "conversationId" : "x124442"
      }
    }
  },
  "conversations" : {
    "x123332" : {
      "message1" : {
        "time" : 12344556,
        "text" : "hello, how are you?",
        "userId" : 10
      },
      "message2" : {
        "time" : 12344560,
        "text" : "Good",
        "userId" : 1
      }
    }
  }
}
The numbers 1, 10 and 17 are sample user ids.
My question is: can this be structured in a better way? The goal is to scale as the number of app users grows while still getting the best performance possible.
Using a document-oriented database such as Firestore, you can store the conversations as below:
{
  "chat_rooms" : [
    {
      "cid" : 100,
      "members" : [1, 2],
      "messages" : [
        { "from" : 1, "to" : 2, "text" : "Hey Dude! Bring it" },
        { "from" : 2, "to" : 1, "text" : "Sure man" }
      ]
    },
    {
      "cid" : 101,
      "members" : [3, 4],
      "messages" : [
        { "from" : 3, "to" : 4, "text" : "I can do that work" },
        { "from" : 4, "to" : 3, "text" : "Then we can proceed" }
      ]
    }
  ]
}
A few examples of NoSQL queries (MongoDB-style shell syntax) you could run against this structure.
Get all the conversations of a logged-in user with the user id 1:
db.chat_rooms.find({ members: 1 })
Get all the documents containing messages sent by the user id 1:
db.chat_rooms.find({ "messages.from": 1 })
The above database structure can also be implemented in an RDBMS as table relationships using MySQL or MSSQL, and it also works for group chat room applications.
This structure is optimized to reduce document reads, which can save you money on infrastructure.
In the example above you still get only 2 document reads even though there are 4 messages; if you instead stored every message as its own document and queried by sender id, you would use 4 reads, which adds up quickly once you have heavy conversation histories in your database.
One case for storing messages could look something like this:
"userMessages":
{ "simplelogin:1":
{ "simplelogin:2":
{ "messageId1":
{ "uid": "simplelogin:1",
"body": "Hello!",
"timestamp": Firebase.ServerValue.TIMESTAMP },
"messageId2": {
"uid": "simplelogin:2",
"body": "Hey!",
"timestamp": Firebase.ServerValue.TIMESTAMP }
}
}
}
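A minimal sketch of writing one message into that structure with the legacy web SDK (matching the Firebase.ServerValue.TIMESTAMP constant above; the database URL is a placeholder):
var ref = new Firebase('https://<your-firebase>.firebaseio.com/userMessages/simplelogin:1/simplelogin:2');
// push() generates the messageId key; the timestamp is filled in by the server.
ref.push({
  uid: 'simplelogin:1',
  body: 'Hello!',
  timestamp: Firebase.ServerValue.TIMESTAMP
});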
Here is a Fireslack example this structure came from. The tutorial builds a Slack-like app using Firebase:
https://thinkster.io/angularfire-slack-tutorial
If you want something more specific, more information would be helpful.

Firebase indexing on huge lists (100000+ items)

I'm migrating my relational database to Firebase. In general, I have a planner for workers. They can add an item ('appointment') to their schedule. I've read the Firebase documentation and found a section on indexing.
So I've created the following structure (date = YYYYMMDD and time = HHMMSS):
{
  "appointments" : {
    "id1" : { "date" : "20141207", "time" : "170000", "worker" : "worker1" },
    "id2" : { "date" : "20141208", "time" : "170000", "worker" : "worker1" }
  }
}
I've added an index for date, time and worker, to be able to query data like this (e.g. fetch all appointments for today):
curl -X GET 'https://myapp.firebaseio.com/appointments.json?orderBy="date"&equalTo="20141207"'
This works as expected and does the job well. The problem is, the number of appointments can grow exponentially (about a year from now, there could be 100000+ appointments). Is it a good approach to use these indexes? Another option would be to store the date and time also separately, like this:
{
  "20141207" : {
    "170000" : { "id1" : true }
  },
  "20141208" : {
    "170000" : { "id2" : true }
  }
}
This would ensure that appointments can be fetched per day very fast. Or is Firebase able to handle this just using indexes?
The number of records in the path won't be an issue; Firebase is a scalable, real-time back end that handles hundreds of thousands of concurrent connections and millions of nodes. Querying should be fast. This is the point of an index and, like all things Firebase, must meet our standards of speed and excellence.
Be sure to read about '.indexOn' and to implement this in your security rules:
{
  "rules": {
    "appointments": {
      ".indexOn": ["date", "time", "worker"]
    }
  }
}
Also, your real limitation here will be the bandwidth of transferring data over the tubes, so be sure to limit your results in some manner and paginate:
curl -X GET 'https://myapp.firebaseio.com/appointments.json?orderBy="date"&equalTo="20141207"&limitToFirst=100'
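If a single day ever returns too much data, the same documented REST filters (startAt, endAt, limitToFirst) can also be combined into a range query over the date index; a sketch using the question's URL:
curl -X GET 'https://myapp.firebaseio.com/appointments.json?orderBy="date"&startAt="20141207"&endAt="20141231"&limitToFirst=100'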