I'm using the .NET backend of Azure Mobile Services in a Windows Phone app. I've added the offline sync support that Azure Mobile Services provides, which uses SQLite for local storage.
The app makes successful push calls (I can see the data in my remote database, and the records exist in the local DB created by the Azure Mobile offline sync layer).
On PullAsync calls, the client makes the right call to the service (the table controller), and the service computes the results and returns multiple rows from the database. However, the results are lost on the way: the client app receives an empty JSON message (I double-checked this with Fiddler).
IMobileServiceSyncTable<Contact> _contactTable;
/* Table Initialization Code */
var contactQuery = _contactTable.Where(c => c.Owner == Service.UserId);
await _contactTable.PullAsync("c0", contactQuery);
var contacts = await _contactTable.Select(c => c).ToListAsync();
Any suggestion on how I can investigate this issue?
Update
The code above uses incremental sync by passing a query ID of "c0". Passing null as the first argument to PullAsync disables incremental sync and returns all the rows, which works as expected.
await _contactTable.PullAsync(null, contactQuery);
But I'm still not able to get incremental sync to return the rows when the app is reinstalled.
I had a similar issue; I found that the problem was caused by making different sync calls against the same table.
In my App I have a list of local users, and I only want to pull down the details for the users I have locally.
So I was issuing this call in a loop:
await userTbl.PullAsync("Updating User Table", userTbl.CreateQuery().Where(x => x.Id == grp1.UserId || x.Id == LocalUser.Id));
What I found is that by adding the user ID to the query ID I overcame the issue:
await userTbl.PullAsync("User-" + grp1.UserId, userTbl.CreateQuery().Where(x => x.Id == grp1.UserId || x.Id == LocalUser.Id));
As a side note, depending on your ID format, the query ID string you supply has to be less than 50 characters long.
:-)
The .NET backend is set up to not send JSON for fields that have their default value (e.g., zero for integers). There's a bug in the interaction between this behavior and the offline SDK. As a workaround, you should set up the default value handling in your WebApiConfig:
config.Formatters.JsonFormatter.SerializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Include;
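For context, here's a minimal sketch of where that line can live in a Mobile Services .NET backend's WebApiConfig.Register method; the surrounding scaffolding varies by project template, so treat it as illustrative:
public static class WebApiConfig
{
    public static void Register()
    {
        // Standard Mobile Services .NET backend bootstrapping
        ConfigOptions options = new ConfigOptions();
        HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));

        // Workaround: always serialize default-valued fields so the
        // offline SDK receives complete JSON objects
        config.Formatters.JsonFormatter.SerializerSettings.DefaultValueHandling =
            Newtonsoft.Json.DefaultValueHandling.Include;
    }
}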
You should also try doing similar queries using the online SDK (use IMobileServiceTable instead of the sync table); that will help narrow down the problem.
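For example, a quick sketch of the equivalent online query, assuming client is your MobileServiceClient instance:
// Query the remote table directly, bypassing the local SQLite store
IMobileServiceTable<Contact> remoteTable = client.GetTable<Contact>();
List<Contact> contacts = await remoteTable
    .Where(c => c.Owner == Service.UserId)
    .ToListAsync();
// If this returns rows but the sync-table pull does not, the problem is in
// the offline/serialization layer rather than in the backend query.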
A custom delegating handler in your MobileServiceClient instance is a very useful tool for debugging the deserialisation:
public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Request happens here
        var response = await base.SendAsync(request, cancellationToken);

        // Read the response content and try to deserialise it here,
        // e.g. set a breakpoint and inspect the raw JSON:
        var json = await response.Content.ReadAsStringAsync();

        return response;
    }
}
// In your mobile client code:
var client = new MobileServiceClient("https://xxx.azurewebsites.net", new MyHandler());
This helped me to solve my issues. See https://blogs.msdn.microsoft.com/appserviceteam/2016/06/16/adjusting-the-http-call-with-azure-mobile-apps/ for more details.
Related
I am using Realm and building a Swift mobile app. I am really struggling to understand why and when Partial realms are created.
Here is my scenario:
A user logs in to the app and is brought to the first view controller.
In the first view controller's viewDidLoad, I execute a query to get the current user, subscribe to the query, and add an observer to let me know when the data is synced:
let currentUserArr = realm.objects(DBUser.self).filter("id == %@", userId)
self.subscription = currentUserArr.subscribe(named: "current user")
self.subscriptionToken = self.subscription.observe(\.state, options: .initial) { state in
    switch state {
    case .creating:
        print("creating")
    case .pending:
        print("pending")
    case .complete:
        print("complete")
        self.artist = currentUserArr[0]
    case .invalidated:
        print("invalidated")
    case .error(let err):
        //seal.reject(err)
        print(err)
    }
}
This makes sense: when I check Realm Cloud, I see a new partial realm created with the path:
/db/__partial/DyeOy3OR4sNsqMi2OmDQQEzUa8F3/~7f11cf52
However, here is where my confusion starts. I log the user out, then log back in, and the code above executes again. I assumed Realm would just reuse the partial already created, but instead it creates an entirely new partial:
/db/__partial/DyeOy3OR4sNsqMi2OmDQQEzUa8F3/~8bc7bc49
Is this by design, or should I somehow be reusing partials rather than having a new one created every time a query is executed (even when it is executed by the same user)?
I have posted on Realm Forums as well:
https://forums.realm.io/t/realm-platform-realm-path-partial-s/2833
I don't believe I was actually logging the current sync user out. Upon further testing, once I did log out and log back in, the existing partial was re-used. This is a non-issue.
I would like to get the gyroscope value of a smartphone and send it to another one. I managed to set a value and retrieve it, but the result is very laggy. Is the following method correct?
If not, what can I change?
If so, is there another way to set a value and retrieve it in real time in Unity?
// UPDATING THE VALUE
reference = FirebaseDatabase.DefaultInstance.RootReference;
Dictionary<string, object> gyro = new Dictionary<string, object>();
gyro["keyToUpdate"] = valueToUpdate;
reference.Child("parentKey").UpdateChildrenAsync(gyro);

// RETRIEVING THE VALUE
FirebaseDatabase.DefaultInstance
    .GetReference("parentKey")
    .Child("keyToUpdate")
    .GetValueAsync().ContinueWith(task => {
        if (task.IsFaulted) {
            Debug.Log("error");
        }
        else if (task.IsCompleted) {
            DataSnapshot snapshot = task.Result;
            float valueUpdated = float.Parse(snapshot.Value.ToString());
            Debug.Log(valueUpdated);
        }
    });
Firebase is fundamentally slower than you think it is. This is within its performance boundaries.
With any asynchronous call, you can never be sure how quickly you will receive a response. Keep in mind that Firebase requests are routed through a system layered with elements for dealing with things like authentication and decentralized data.
If you continue to use Firebase, you'll need to make sure your code and UI is set up to allow for possibly long delays which are out of your control. Or you could spend lots of time building your own infrastructure, as DoctorPangloss mentioned.
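If near-real-time updates are what you're after, one common pattern is to subscribe to the reference's ValueChanged event instead of issuing repeated one-shot GetValueAsync reads, so the server pushes changes to you as they happen. A minimal sketch, reusing the key names from the question:
using Firebase.Database;
using UnityEngine;

public class GyroListener : MonoBehaviour
{
    void Start()
    {
        // Fires once with the current value, then again on every change
        FirebaseDatabase.DefaultInstance
            .GetReference("parentKey")
            .Child("keyToUpdate")
            .ValueChanged += HandleValueChanged;
    }

    void HandleValueChanged(object sender, ValueChangedEventArgs args)
    {
        if (args.DatabaseError != null)
        {
            Debug.LogError(args.DatabaseError.Message);
            return;
        }
        float valueUpdated = float.Parse(args.Snapshot.Value.ToString());
        Debug.Log(valueUpdated);
    }
}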
Should I use Akavache as the primary local database in my Xamarin.Forms application, or as a cache database on top of another SQLite database? I can't find any examples of how to update, insert, or delete data in an Akavache object. For example:
CacheItems = await BlobCache.LocalMachine.GetOrFetchObject<List<Item>>("CacheItems",
    async () => await getdatafromAzure(recursive));
I am getting items from Azure and storing them on the local machine, and these items can be edited or deleted, and the user can add new items. How do I do that?
Anything saved to LocalMachine gets persisted physically to the device, so on app or device restart it'll still be there (if the user hasn't removed the app or cleared the data, that is).
As far as how to access/save data, there are lots of good samples here:
https://github.com/akavache/Akavache
InsertObject and GetObject are your basic access methods, and then there are lots of extension methods like GetOrFetchObject and GetAndFetchLatest, which are very useful.
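For instance, a quick sketch of GetAndFetchLatest, reusing the hypothetical Azure fetch from the question:
// Returns the cached list immediately (if present), then fetches from
// Azure and fires a second time with the fresh result
BlobCache.LocalMachine
    .GetAndFetchLatest("CacheItems", () => getdatafromAzure(recursive))
    .Subscribe(items =>
    {
        // May run twice: once with cached data, once with fresh data
        CacheItems = items;
    });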
Here's a quick sample I haven't thoroughly tested, to show one way to access things. It'd probably be better to use some of the extension methods, but I figure an example like this is conceptually useful.
BlobCache.LocalMachine.GetObject<TObject>(someKey)
    .Catch((KeyNotFoundException ke) => Observable.Return<TObject>(null))
    .SelectMany(result =>
    {
        // Object doesn't exist in the cache, so create a new one
        if (result == null)
            result = new TObject();

        // Apply whatever updates you are wanting to do
        //result.SomeField = "bob";

        // This will replace or insert data
        return BlobCache.LocalMachine.InsertObject(someKey, result);
    })
    .Subscribe();
It's really all pretty boring stuff :-p Just get an object and store an object. Under the hood, Akavache does a lot of really cool optimizations and synchronization around that boring stuff, which is what allows it to stay boring for the rest of us.
In most of my cases, when I start up a ViewModel I retrieve the object from the cache and then just store it as some property on the ViewModel or inside some service class. Then, whenever changes are made to the object, I just insert it into the cache:
BlobCache.LocalMachine.InsertObject(someKey, result).Subscribe();
At that point I know that if the app closes down, I'll have the latest version of that object ready when the user starts the app back up.
The examples I gave are more the full Rx way of accessing things... What you have in your original question works fine:
await BlobCache.LocalMachine.GetOrFetchObject<List<object>>("CacheItems",
    async () => await getdatafromAzure(recursive));
That will check the cache first, and only if the object doesn't exist there does it go to Azure...
LocalMachine stores to the physical device, InMemory just stores to an internal dictionary that goes away once the app is unloaded from memory, and UserAccount works with NT roaming profiles.
I've implemented the tutorial here: Server Broadcast with SignalR. My next step is to hook it up to a SQL DB via EF Code First.
In the StockTicker class the authors write the following code:
foreach (var stock in _stocks.Values)
{
if (TryUpdateStockPrice(stock))
{
BroadcastStockPrice(stock);
}
}
The application I am working on needs a real-time push news feed with a small audience (around 300 users). What would be the disadvantages of me simply doing something like this (pseudocode):
foreach (var message in _db.Messages.Where(x => x.Status == "New"))
{
    BroadcastMessage(message);
}
And what would be the best way to update each message's status in the DB to something other than "New" without totally compromising performance?
I think the best way to determine whether or not your simple solution compromises performance too much is to try it out.
Something like the following should work for updating each message's status (note that with EF the save call is SaveChanges, rather than LINQ to SQL's SubmitChanges):
foreach (var message in _db.Messages.Where(x => x.Status == "New"))
{
    BroadcastMessage(message);
    message.Status = "Read";
}
_db.SaveChanges();
If you find this is too inefficient, you could always write a stored procedure that will select new messages and mark them as read.
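For example, a rough sketch; the procedure name is illustrative and the Message entity comes from your EF model:
// Hypothetical stored procedure that atomically selects new messages
// and marks them as read in one round trip:
//   CREATE PROCEDURE dbo.DequeueNewMessages AS
//     UPDATE Messages SET Status = 'Read' OUTPUT INSERTED.* WHERE Status = 'New'
List<Message> newMessages = _db.Database
    .SqlQuery<Message>("EXEC dbo.DequeueNewMessages")
    .ToList();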
It might be better to fine-tune performance by adjusting the rate at which you poll the database, and by batching messages so that you broadcast a single message via SignalR for each DB query even when the DB returns multiple new messages.
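Here's a minimal sketch of that batching idea; the hub name MessagesHub and the client-side broadcastMessages handler are illustrative, not from the tutorial:
// Assumes using Microsoft.AspNet.SignalR; and an EF context in _db
private void BroadcastNewMessages()
{
    var newMessages = _db.Messages.Where(x => x.Status == "New").ToList();
    if (newMessages.Count == 0) return;

    // One SignalR broadcast for the whole batch instead of one per row
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<MessagesHub>();
    hubContext.Clients.All.broadcastMessages(newMessages);

    foreach (var message in newMessages)
        message.Status = "Read";
    _db.SaveChanges();
}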
If you decide to go the stored-proc route, here is another fairly in-depth article about using stored procedures with EF: http://msdn.microsoft.com/en-us/data/gg699321.aspx
I am new to caching and am trying to understand how it works in general. Below is a code snippet from the ServiceStack website.
public object Get(CachedCustomers request)
{
//Manually create the Unified Resource Name "urn:customers".
return base.RequestContext.ToOptimizedResultUsingCache(base.Cache, "urn:customers", () =>
{
//Resolve the service in order to get the customers.
using (var service = this.ResolveService<CustomersService>())
return service.Get(new Customers());
});
}
public object Get(CachedCustomerDetails request)
{
//Create the Unified Resource Name "urn:customerdetails:{id}".
var cacheKey = UrnId.Create<CustomerDetails>(request.Id);
return base.RequestContext.ToOptimizedResultUsingCache(base.Cache, cacheKey, () =>
{
using (var service = this.ResolveService<CustomerDetailsService>())
{
return service.Get(new CustomerDetails { Id = request.Id });
}
});
}
My questions are:
I've read that cached data is stored in RAM on the same or a distributed server. How much data can it handle? For instance, in the first method, if the customer count is more than 1 million, doesn't that occupy too much memory?
In the general case, do we apply caching only to GET operations and invalidate the cache when the data is updated?
Please suggest a tool to check the memory consumption of the cache.
I think you can find the answers to your questions here: https://github.com/ServiceStack/ServiceStack/wiki/Caching
I've read that cached data is stored in RAM on the same or a distributed server...
There are several ways to 'persist' cached data. Again, see https://github.com/ServiceStack/ServiceStack/wiki/Caching. 'InMemory' is the option you seem to be questioning. The other options don't have the same impact on RAM.
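Swapping providers is just a matter of registering a different ICacheClient in your AppHost's Configure method. A sketch; the Redis wiring shown is one common option, and the connection string is illustrative:
// In-memory: fastest, but bounded by the web server's RAM
container.Register<ICacheClient>(new MemoryCacheClient());

// Or back the cache with Redis so memory pressure moves off the app server:
// container.Register<IRedisClientsManager>(
//     new PooledRedisClientManager("localhost:6379"));
// container.Register<ICacheClient>(c =>
//     c.Resolve<IRedisClientsManager>().GetCacheClient());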
In the general case, do we apply caching only to GET operations and invalidate the cache when the data is updated?
In ServiceStack you can manually clear/invalidate the cache or use time-based expiration. If you manually clear the cache, I would recommend doing so on DELETEs and UPDATEs. You are free to choose how you manage and invalidate the cache; you just want to avoid having stale data in it. As far as 'applying caching' goes, you would return cached data on GET operations, but your system can access cached data just like any other data store. Again, you just need to recognize that the cache may not have the most recent set of data.
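As a sketch of that invalidation, reusing the cache keys from the snippet above (the Put method and request DTO are illustrative, and RemoveFromCache is the v3-era helper matching the ToOptimizedResultUsingCache style used in the question):
public object Put(UpdateCustomer request)
{
    // ... persist the update to the data store ...

    // Evict the stale entries so the next GET repopulates them
    base.RequestContext.RemoveFromCache(base.Cache, "urn:customers");
    base.RequestContext.RemoveFromCache(base.Cache, UrnId.Create<CustomerDetails>(request.Id));

    return null;
}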