Update and retrieve data in Unity using Firebase, performance issue

I would like to read a smartphone's gyroscope value and send it to another phone. I managed to set a value and retrieve it, but the result is very laggy. Is the following method correct?
If not, what should I change?
If so, is there another way to set and retrieve a value in real time in Unity?
// UPDATING THE VALUE
reference = FirebaseDatabase.DefaultInstance.RootReference;
Dictionary<string, object> gyro = new Dictionary<string, object>();
gyro["keyToUpdate"] = valueToUpdate;
reference.Child("parentKey").UpdateChildrenAsync(gyro);

// RETRIEVING THE VALUE
FirebaseDatabase.DefaultInstance
    .GetReference("parentKey")
    .Child("keyToUpdate")
    .GetValueAsync().ContinueWith(task => {
        if (task.IsFaulted) {
            Debug.Log("error");
        }
        else if (task.IsCompleted) {
            DataSnapshot snapshot = task.Result;
            // Parse with the invariant culture (System.Globalization) so a
            // value like "0.5" is not misread on devices whose locale uses
            // a decimal comma.
            float valueUpdated = float.Parse(snapshot.Value.ToString(),
                CultureInfo.InvariantCulture);
            Debug.Log(valueUpdated);
        }
    });

Firebase is fundamentally slower than you might expect; the latency you are seeing is within its normal performance boundaries.

With any asynchronous call, you can never be sure how quickly you will receive a response. Keep in mind that Firebase requests are routed through a layered system that handles things like authentication and distributed data.
If you continue to use Firebase, you'll need to make sure your code and UI are set up to tolerate possibly long delays that are out of your control. Otherwise you could spend a lot of time building your own infrastructure, as DoctorPangloss mentioned.

Related

Firebase Realtime Database Overrides my Data at a location even when I use .push() method

My limited concrete knowledge about writing to the Firebase Realtime Database is that it can be done in several ways, the two most prominent being the set() and push() methods.
Long story short: push() generates a new key and adds the data under that key as a child of the node.
Firebase has cooperated with me in my previous projects, but in this one I have no idea what is going on. I have tried different combinations of push and set to achieve my goal, but no progress so far.
In the code below, I want to achieve two things: write chatUID, message, and time only once, but write several children like '-MqBBXPzUup7czdG2xCI', all under the same node "firebaseGeneratedId1".
A better structure is below.
Help with the code, please. Thanks.
UPDATE
Here is my code
The writer's reference:
_listeningMsgRef = _msgDatabase
    .reference()
    .child('users')
    .child(userId)
    .child('chats')
    .child(chatUIDConcat);
When a user hits sendMessage, here is the function called
void sendMessage() {
  var msg = _messageController.text;
  _messageController.clear();
  var timeSent = DateTime.now().toString();
  // Send
  Map msgMap = {
    'message': msg,
    'sender': userId,
    'time': timeSent,
    'chatUID': chatUIDConcat
  };
  // String _key = _listeningMsgRef.push().key;
  _listeningMsgRef.child(chatUIDConcat).set(msgMap).whenComplete(() {
    SnackBar snackBar = const SnackBar(content: Text('Message sent'));
    ScaffoldMessenger.of(context).showSnackBar(snackBar);
    _listeningMsgRef.child(chatUIDConcat).push().set(msgMap);
  });
}
The idea behind the sendMessage function is to write
chatUID:"L8pacdUOOohuTlifrNYC3JALQgh2+q5D38xPXVBTwmwb5Hq..."
message: "I'm coming"
newMessage: "true"
sender: "L8pacdUOOohuTlifrNYC3JALQgh2"
When it is complete, then push new nodes under the user nodes.
EDIT:
I later figured out the issue. I wasn't able to achieve my goal because I was a bit tense while working on that project. The issue was that I wanted to write new data into the '-MqBBXPzUup7czdG2xCI' node without overwriting the old data in it.
The solution is straightforward: I just needed to write the data as new child nodes under it. Nothing much. Thanks, Frank van Puffelen, for your assistance.
Paths in the Firebase Realtime Database are automatically created when you write any data under them, and deleted when you remove the last data under them.
So you don't need to create the node for the chat room first. Instead, it gets auto-created when you write the first message into it with _listeningMsgRef.child(chatUIDConcat).push().set(msgMap).
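The set-versus-push difference that tripped up the question can be simulated with a plain object: set() replaces whatever is stored at a location, while push() creates a fresh child key so existing siblings survive. A minimal sketch (fakePushId is a stand-in for Firebase's timestamp-based push IDs, and the node is just an in-memory object):

```javascript
// Simulated database node: set() overwrites the whole location, while
// push() adds a new, uniquely keyed child next to the existing ones.
let nextId = 0;
const fakePushId = () => `-FakeKey${nextId++}`; // stand-in for real push IDs

const chatNode = {};

function set(node, key, value) {
  node[key] = value; // replaces anything previously stored at `key`
}

function push(node, value) {
  const key = fakePushId();
  node[key] = value; // existing siblings are untouched
  return key;
}

push(chatNode, { message: "hello" });
push(chatNode, { message: "I'm coming" });
// Both pushed messages coexist under their own generated keys.

set(chatNode, "meta", { chatUID: "abc" });
set(chatNode, "meta", { chatUID: "xyz" }); // overwrites the first meta write
console.log(chatNode.meta.chatUID); // "xyz" -- the second set() won
```

This mirrors the asker's eventual fix: messages that must accumulate go in via push(), and only data that is meant to be replaced wholesale goes in via set().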

Firebase client-side fan-out for data consistency

From the blog post below:
Firebase client-side fan-out for data consistency
Multi-path updates sound awesome. Does that work the same for Multi-path deletes?
Use case: I add a new post and it is fanned-out to many many followers. I decide to delete the post later on. Does the delete work the same? Do you have an example?
You can remove the post from many followers in a single operation by setting the value at each follower's path to null.
function deletePostFromFollowers(postId, followers) {
  var updates = {};
  followers.forEach(function(followerId) {
    updates['/users/' + followerId + '/posts/' + postId] = null;
  });
  ref.update(updates);
}
deletePostFromFollowers('-K18713678adads', ['uid1', 'uid2']);
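The fan-out map can also be built and inspected before sending it, which is handy for unit-testing the paths. The function below mirrors the answer's logic; ref.update is deliberately left out so the sketch runs standalone:

```javascript
// Build the multi-path delete map for one post across many followers.
// In the real call, passing this object to ref.update() deletes every
// path whose value is null, atomically.
function buildDeleteUpdates(postId, followers) {
  const updates = {};
  followers.forEach(function (followerId) {
    updates['/users/' + followerId + '/posts/' + postId] = null;
  });
  return updates;
}

const updates = buildDeleteUpdates('-K18713678adads', ['uid1', 'uid2']);
console.log(Object.keys(updates));
// One key per follower's copy of the post; ref.update(updates) would
// remove all of them in a single operation.
```

Because update() applies every path in one atomic write, either all followers lose the post or none do, which is exactly the consistency property fan-out is meant to preserve.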

PullAsync Silently fails

I'm using a .NET backend with Azure Mobile Services on a Windows Phone app. I've added the offline sync support that Azure Mobile Services provides, using SQLite.
The app makes successful push calls (I can see the data in my database, and the rows exist in the local db created by the Azure Mobile offline services).
For PullAsync calls, it makes the right call to the service (the table controller), and the service computes the results and returns multiple rows from the database. However, the results are lost on the way: my client app receives an empty JSON message (I double-checked with Fiddler).
IMobileServiceSyncTable<Contact> _contactTable;
/* Table Initialization Code */
var contactQuery = _contactTable.Where(c => c.Owner == Service.UserId);
await _contactTable.PullAsync("c0", contactQuery);
var contacts = await _contactTable.Select(c => c).ToListAsync();
Any suggestion on how I can investigate this issue?
Update
The code above is using incremental sync by passing query ID of "c0". Passing null to PullAsync for the first argument disables incremental sync and makes it return all the rows, which is working as expected.
await _contactTable.PullAsync(null, contactQuery);
But I'm still not able to get incremental sync to return the rows when app is reinstalled.
I had a similar issue with this. I found that the problem was caused by making different sync calls against the same table.
In my app I have a list of local users, and I only want to pull down the details for the users I have locally.
So I was issuing this call in a loop:
userTbl.PullAsync("Updating User Table", userTbl.CreateQuery().Where(x => x.Id == grp1.UserId || x.Id == LocalUser.Id));
What I found is that by adding the user ID to the query ID, I overcame the issue:
userTbl.PullAsync("User-" + grp1.UserId, userTbl.CreateQuery().Where(x => x.Id == grp1.UserId || x.Id == LocalUser.Id));
As a side note, depending on your ID format, the query ID string you supply has to be less than 50 characters in length.
:-)
The .NET backend is set up to not send JSON for fields that have their default value (e.g., zero for integers), and there's a bug in how this interacts with the offline SDK. As a workaround, set up the default-value handling in your web API configuration:
config.Formatters.JsonFormatter.SerializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Include;
You should also try doing similar queries using the online SDK (use IMobileServiceTable instead of sync table)--that will help narrow down the problem.
A very useful tool for debugging the deserialization is a custom delegating handler in your MobileServiceClient instance:
public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // The request goes out here
        var response = await base.SendAsync(request, cancellationToken);
        // Read response.Content and try to deserialise it here...
        return response;
    }
}

// In your mobile client code:
var client = new MobileServiceClient("https://xxx.azurewebsites.net", new MyHandler());
This helped me to solve my issues. See https://blogs.msdn.microsoft.com/appserviceteam/2016/06/16/adjusting-the-http-call-with-azure-mobile-apps/ for more details.

Database broadcast in SignalR

I've implemented the tutorial here Server Broadcast with SignalR and my next step is to hook it up to an SQL db via EF code first.
In the StockTicker class the authors write the following code:
foreach (var stock in _stocks.Values)
{
if (TryUpdateStockPrice(stock))
{
BroadcastStockPrice(stock);
}
}
and the application I am working on needs a real-time push news feed with a small audience (around 300 users). What would be the disadvantages of simply doing something like this (pseudocode)?
foreach (var message in _db.Messages.Where(x => x.Status == "New"))
{
    BroadcastMessage(message);
}
and what would be the best way to update each message status in the DB to != New without totally compromising performance?
I think the best way to determine whether or not your simple solution compromises performance too much is to try it out.
Something like the following should work for updating each message status.
foreach (var message in _db.Messages.Where(x => x.Status == "New"))
{
    BroadcastMessage(message);
    message.Status = "Read";
}
_db.SaveChanges(); // SubmitChanges() if you are on LINQ to SQL rather than EF
If you find this is too inefficient, you could always write a stored procedure that will select new messages and mark them as read.
It might be better to fine-tune performance by adjusting the rate at which you poll the database and by batching, so you broadcast a single SignalR message per DB query even when the query returns multiple new messages.
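The batching idea is language-agnostic. Here is a sketch of one poll cycle in plain JavaScript, with the database and the SignalR broadcast replaced by in-memory stubs (all names here are illustrative, not part of any real API):

```javascript
// One polling cycle: gather all "New" messages, broadcast them as a single
// batch, then mark them "Read". The db and broadcast are in-memory stubs.
const db = {
  messages: [
    { id: 1, text: "hello", status: "New" },
    { id: 2, text: "world", status: "New" },
    { id: 3, text: "old", status: "Read" },
  ],
};

const broadcasts = [];
function broadcastBatch(batch) {
  // With SignalR this would be one hub call carrying the whole batch.
  broadcasts.push(batch);
}

function pollOnce() {
  const fresh = db.messages.filter((m) => m.status === "New");
  if (fresh.length > 0) {
    broadcastBatch(fresh.map((m) => m.text)); // single broadcast per poll
    fresh.forEach((m) => (m.status = "Read")); // analogous to SaveChanges
  }
}

pollOnce();
console.log(broadcasts.length); // 1 broadcast even though two rows were new
pollOnce();
console.log(broadcasts.length); // still 1 -- nothing new on the second poll
```

Batching keeps the number of network messages proportional to the polling rate rather than to the number of rows, which matters once several clients produce messages between polls.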
If you decide to go stored proc route, here is another fairly in-depth article about using them with EF: http://msdn.microsoft.com/en-us/data/gg699321.aspx

minimum interval for System.Threading.Timer

I need to continuously check memory for a change to notify about, and I use System.Threading.Timer to achieve this. I want the notification ASAP, so I need to run the callback method quite often, but I don't want the CPU to hit 100% doing it.
Can anybody tell me how I should set the interval of this timer? (I think it would be good to set it as low as possible.)
Thanks
OK, so there is a very basic strategy for being notified immediately of a modification to the dictionary without burning unnecessary CPU cycles, and it involves using Monitor.Wait and Monitor.Pulse/Monitor.PulseAll.
On a very basic level, you have something like this:
public Dictionary<long, CometMessage> Messages = new Dictionary<long, CometMessage>();

public void ModifyDictionary(int key, CometMessage value)
{
    // Monitor.PulseAll must be called while holding the lock on the
    // object being waited on, otherwise it throws SynchronizationLockException.
    lock (Messages)
    {
        Messages[key] = value;
        Monitor.PulseAll(Messages); // wake every thread waiting on Messages
    }
}

public void CheckChanges()
{
    lock (Messages)
    {
        while (true)
        {
            Monitor.Wait(Messages); // releases the lock, blocks until a Pulse
            // The dictionary has changed!
            // TODO: Do some work!
        }
    }
}
Now, this is very rudimentary and you could get all sorts of synchronization issues (read/write), so you should look into Marc Gravell's implementation of a blocking queue and apply the same logic to your dictionary (essentially making a blocking dictionary).
Furthermore, the above example only tells you that the dictionary was modified, not which element was modified. It's probably better to take the basics from above and design your system so you know which element changed, perhaps by storing the last-modified key and checking the value associated with it.
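The same wait/pulse idea can be sketched in JavaScript with a promise that the writer resolves on each modification. Node has no monitors, so this resolvable-promise "signal" is only a stand-in for Monitor.Wait/Monitor.PulseAll, and every name below is illustrative:

```javascript
// A tiny "signal": waiters await a promise; the writer resolves it to wake
// them, then re-arms a fresh promise for the next round of waiters.
function makeSignal() {
  let resolve;
  let promise = new Promise((r) => (resolve = r));
  return {
    wait: () => promise,
    pulseAll: () => {
      const wake = resolve;
      promise = new Promise((r) => (resolve = r)); // re-arm before waking
      wake();
    },
  };
}

const signal = makeSignal();
const messages = new Map();
const seen = [];

async function watcher() {
  for (let i = 0; i < 2; i++) {
    await signal.wait(); // blocks (without burning CPU) until a pulse
    seen.push(messages.get("last"));
  }
}

async function main() {
  const done = watcher();
  messages.set("last", "first change");
  signal.pulseAll();
  await Promise.resolve(); // let the watcher observe the first pulse
  messages.set("last", "second change");
  signal.pulseAll();
  await done;
  console.log(seen);
}

main();
```

As in the C# version, the watcher consumes no CPU while idle and is woken the moment a write happens, which is exactly what a short timer interval tries (and fails) to approximate.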
