ArcGIS add feature task got cancelled - Xamarin.Forms

We were trying to add a feature to the feature server using the following snippet:
// Add the feature to the table.
await serviceFeatureTable.AddFeatureAsync(feature);
// Apply the edits to the service.
await serviceFeatureTable.ApplyEditsAsync();
It works most of the time, but in rare cases adding an asset takes too long to process and eventually times out with a TaskCanceledException, even though the operation has actually succeeded and the item shows up on the server.
Does anybody have any ideas on how to handle this situation, or a link to any related thread? Please share, thanks.
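One way to cope with this is to treat the timeout as "outcome unknown" and verify against the service before retrying. Below is a minimal sketch assuming the ArcGIS Runtime for .NET ServiceFeatureTable API; the where clause used to identify the asset is a placeholder, and depending on your service you may need PopulateFromServiceAsync rather than QueryFeaturesAsync to check the server instead of the local cache.
using System.Linq;
using System.Threading.Tasks;
using Esri.ArcGISRuntime.Data;

public static async Task AddFeatureWithTimeoutCheckAsync(
    ServiceFeatureTable serviceFeatureTable, Feature feature, string whereClause)
{
    // Adding to the local table is fast; pushing the edit to the service is what times out.
    await serviceFeatureTable.AddFeatureAsync(feature);
    try
    {
        await serviceFeatureTable.ApplyEditsAsync();
    }
    catch (TaskCanceledException)
    {
        // The request timed out, but the server may still have committed the edit.
        // Query with a clause that uniquely identifies the new asset (placeholder)
        // before retrying, so the same feature is not added twice.
        var existing = await serviceFeatureTable.QueryFeaturesAsync(
            new QueryParameters { WhereClause = whereClause });
        if (!existing.Any())
        {
            // The edit really was lost: push the pending edits again.
            await serviceFeatureTable.ApplyEditsAsync();
        }
    }
}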

Related

Cosmos DB ChangeFeed Exception Handling

With Cosmos DB ChangeFeed, can anyone please provide some help with exception handling?
Let's say I have 10 documents in the change feed and a loop that iterates through the documents one by one, and an exception happens after the 5th document has been processed.
What is going to happen with the changefeed?
So far, it looks to me like the entire change feed batch is swallowed, i.e. the remaining documents after the exception are gone.
I am just wondering what the back-out strategy is here. Is there a way I can completely back out the entire batch so I do not lose any changes?
It is an old question, but hopefully others may find this useful.
To handle the error, the recommended pattern is to wrap your code in a try-catch. Catch the error and put the offending document on a queue (dead-letter). Have a separate program deal with the documents which produced errors. This way, if you have a 100-document batch and just one document fails, you do not have to throw away the whole batch.
The second reason is that even if you could keep re-reading those documents from the Change Feed, you might lose the latest snapshot of the document: the Change Feed keeps only the last version of a document, and in between other processes can come and change it.
As you keep fixing your code, you will soon find no documents on dead-letter queue.
The Azure Function is called automatically by the Change Feed system. If you want to roll back the Change Feed and control every aspect of it, you should consider using the Change Feed Processor SDK instead.
Microsoft's recommendation is to add a try-catch inside your Cosmos DB trigger function; if any document throws an exception, you have to store it somewhere (a dead-letter location).
Once you start storing failed messages in some location, you have to build metrics, alerts, and a re-processing strategy around them.
Below is my strategy for handling this scenario. One function listens to the DB change feed and pushes the data into a "Topic" (without any processing). I created multiple subscriptions, so each subscription maintains its own dead-letter queue.
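To make the try-catch / dead-letter idea from the answers above concrete, here is a minimal sketch of a Cosmos DB trigger function that parks failing documents on a Service Bus destination instead of aborting the batch. It assumes the v3 WebJobs Cosmos DB and Service Bus bindings; the database, collection, connection, and queue names, and ProcessDocumentAsync, are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedFunction
{
    [FunctionName("ChangeFeedFunction")]
    public static async Task Run(
        [CosmosDBTrigger("mydb", "orders",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
        [ServiceBus("changefeed-deadletter", Connection = "ServiceBusConnection")]
            IAsyncCollector<string> deadLetter,
        ILogger log)
    {
        foreach (var document in documents)
        {
            try
            {
                await ProcessDocumentAsync(document); // your business logic
            }
            catch (Exception ex)
            {
                // One bad document should not swallow the rest of the batch:
                // park it for a separate re-processing job and carry on.
                log.LogError(ex, "Failed to process document {Id}", document.Id);
                await deadLetter.AddAsync(document.ToString());
            }
        }
    }

    private static Task ProcessDocumentAsync(Document document) => Task.CompletedTask; // placeholder
}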

SignalR and SqlDependency refreshment issue - ASP.NET

I have a table in an MSSQL database and an ASPX page, and I need to push all new rows to the page in descending order. I found this awesome tutorial which uses SignalR and SqlDependency, but it shows only the last row, discarding the previous rows that were added while I'm online; it does that because it has a span element to show the data and overwrites that span every time. So I modified the JavaScript code to append the new data instead, and it works fine.
The problem now is that when I refresh the page for the first time, I get the new rows twice, and if I refresh the page again I get them three times, and so on.
The only solution is to close the application and reopen it again, which is effectively like resetting IIS.
So, what can I do to avoid showing duplicated data?
It is not a SignalR issue. It happens because the mentioned tutorial has a series of mistakes, the most evident being the fact that it continuously creates SqlDependency instances but then trashes them without ever unsubscribing from the OnChange event. You should start by adding something like this:
SqlDependency dependency = sender as SqlDependency;
dependency.OnChange -= dependency_OnChange;
before calling SendNotifications inside your event handler. Check this for some inspiration.
UPDATE (previous answer not fully accurate but kept in place for context)
The main problem here is that this technique creates a sort of self-regenerating, infinite chain of SqlDependency instances from inside Web Forms page instances, which become unreachable as soon as your page has finished rendering. This means that once your page lifecycle is complete and the page is rendered, the chain of dependencies stays alive and keeps on working even though the page instance which created it has finished its cycle. The event handler also keeps the page instance alive even though it is unreachable, causing a memory leak.
The only way you can control this is to create these chains somewhere else, for example within a static type that you call with some unique identifier (maybe a combination of page name and username? that depends on your logic). On the first call it would do what your current code does, but on subsequent calls with the same parameters it would do nothing, so the previously created chain remains the only one sending notifications, with no duplicate calls. A sketch of this idea follows below.
It's just a suggestion, and there would be many possible solutions, but you need to understand the original problem: it is practically impossible to get rid of those chains of self-regenerating dependencies unless you find a way to keep track of them and create them only when necessary. I hope that part is clear.
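Here is a minimal sketch of that "create the chain only once per key" idea, assuming the tutorial's SendNotifications / dependency_OnChange structure; the registry class and the key are illustrative, not part of any existing API.
using System;
using System.Collections.Concurrent;

public static class NotificationChains
{
    private static readonly ConcurrentDictionary<string, bool> _started =
        new ConcurrentDictionary<string, bool>();

    public static void EnsureStarted(string key, Action startChain)
    {
        // Only the first caller with a given key actually creates the
        // SqlDependency chain; later page instances reuse the existing one.
        if (_started.TryAdd(key, true))
        {
            startChain();
        }
    }
}

// In Page_Load (or wherever the tutorial starts the chain):
// NotificationChains.EnsureStarted("Messages", SendNotifications);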
PS: this behavior is very similar to what you get sometimes with event handlers getting leaked and keeping alive objects which should be killed, this is what fooled me with the previous answer. It is in a way a similar problem (leaked objects), but with a totally different cause. The tutorial you follow does not clarify that and brings you to this situation where phantom code keeps on executing and memory is lost.
I got it working, although I absolutely don't like this approach: I declared a static member in the Global.asax file and checked its value in the Page_Load event; if it is true, don't start a new SqlDependency instance, otherwise start it.
if (!Global.PageIsFired)
{
    Global.PageIsFired = true;
    SqlDependency.Stop(ConfigurationManager.ConnectionStrings["db"].ConnectionString);
    SqlDependency.Start(ConfigurationManager.ConnectionStrings["db"].ConnectionString);
    SendNotifications();
}
Dear #Wasp,
Your last update helped me a lot to understand the problem, so thank you so much for your time and prompt support.
Dear #dyatchenko,
Thanks a lot for your comments; they were very useful too.

Background Task - ASP.NET

I would like to create a background task which continuously writes the location coming from a mobile device into a database, and on a website I would like to see that location immediately as it changes.
I am using an SQL Azure database, so pushing and polling are not an option. Also, I am not sure if I can use a cache, since the location changes continuously.
I think I have to create some infinite loop which carries on a task continuously. But how does this concept work?
Does this simply involve creating a thread and a while(true) { ... } loop?
I worked on a similar situation, and the approach I went for was to have a special page (/StartJob.aspx?AccessKey=xxxxxxxxxxxxx) that, when hit with the right access key, would start a job cycle.
I then set up a "Cron Job" using www.setCronJob.com to call this page at regular intervals. This service can notify you by email if it fails, too.
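A minimal sketch of that job page, assuming a Web Forms code-behind; the access key check, its hard-coded value, and RunJobCycle are placeholders for your own logic and configuration.
using System;
using System.Web.UI;

public partial class StartJob : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Reject calls that don't carry the shared secret.
        if (Request.QueryString["AccessKey"] != "xxxxxxxxxxxxx")
        {
            Response.StatusCode = 403;
            return;
        }

        // Run one job cycle; the external cron service hits this page on a schedule.
        RunJobCycle();
        Response.Write("Job cycle completed");
    }

    private void RunJobCycle()
    {
        // Read the latest locations, write them to the database, etc.
    }
}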
Have a look at the Timer control:
http://msdn.microsoft.com/en-us/library/bb386404.aspx
It sounds like something that could help you achieve what you need :)

Strategies for "Pre-Warming" an ASP.NET Cache

I have an ASP.NET MVC 3 / .NET Web Application, which is heavily data-driven, mainly around the concept of "Locations" (New York, California, etc).
Anyway, we have some pretty heavy database queries, whose results get cached once they finish.
E.g:
public ICollection<Location> FindXForX(string x)
{
    var result = _cache.Get(x.ToKey()) as ICollection<Location>; // try cache
    if (result == null)
    {
        result = _repo.Get(x.ToKey()); // call db
        _cache.Add(x.ToKey(), result); // add to cache
    }
    return result;
}
But I don't want the unlucky first user to be stuck waiting for this database call.
The database call can take anywhere from 40-60 seconds, well over the default timeout for an ASP.NET request.
I want to "pre-warm" these calls for certain "popular" locations (e.g New York, California) when my app starts up, or shortly after.
I don't want to simply do this in Global asax (Application_Start), because the app will take too long to start up. (i plan to pre-cache around 15 locations, so that's a few minutes of work).
Is there any way i can fire off this logic asynchronously? Maybe a service on the side is a better option?
The only other alternative i can think of is have an admin page which has buttons for these actions. So an administrator (e.g me) can fire off these queries once the app has started up. That would be the easiest solution.
Any advice?
The quick and dirty way would be to fire-off a Task from Application_Start
But I've found that it's nice to wrap this functionality into a bit of infrastructure so that you can create an ~/Admin/CacheInfo page to let you monitor the progress, state, and exceptions that may be in the process of loading up the cache.
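A minimal sketch of that idea follows, assuming an MVC 3 Global.asax; the popular-location list, ILocationService, and the CacheWarmupStatus holder (which an ~/Admin/CacheInfo page could read) are illustrative names, not existing APIs.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

// Illustrative status holder so an ~/Admin/CacheInfo page can report progress.
public static class CacheWarmupStatus
{
    public static string CurrentKey;
    public static bool Completed;
    public static readonly ConcurrentQueue<Exception> Errors = new ConcurrentQueue<Exception>();
}

// In Global.asax.cs:
protected void Application_Start()
{
    // ... usual MVC route/area registration ...

    // Fire-and-forget warm-up; requests are served while this runs in the background.
    Task.Factory.StartNew(() =>
    {
        var service = DependencyResolver.Current.GetService<ILocationService>(); // hypothetical service exposing FindXForX
        var popularLocations = new[] { "new-york", "california" /* ... ~15 keys ... */ };

        foreach (var key in popularLocations)
        {
            try
            {
                CacheWarmupStatus.CurrentKey = key;
                service.FindXForX(key); // populates the cache as a side effect
            }
            catch (Exception ex)
            {
                CacheWarmupStatus.Errors.Enqueue(ex);
            }
        }
        CacheWarmupStatus.Completed = true;
    });
}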
Look into "Always running" app setting for IIS 7.5. What this basically do is have an app pool ready whenever the existing one is to be recycled. Of course, the very first would take the 40-60 seconds but afterwards things would be fast unless you physically restart the machine.
Before you start cache warming, I suggest you check that the query is "as fast as it can be" by first looking at how many logical reads it is doing.
Sounds like you should just dump the results in a separate table and have a scheduled task to repopulate that table periodically.
If one pre-calculated table isn't enough because it ends up with too much data that you need to search through, you could use more than one.
One solution is to launch a worker thread in your Application_Start method that does the pre-warming in the background. If you do it right, your app won't take longer to start up, because the thread will be executed asynchronously.
One option is to use a website health monitoring service. It can be used to both check website health, and if scheduled frequently enough, to invoke your common URLs.
Doing the loading in a Task from Application_Start is the way to go, as mentioned by Scott.
Just be careful: if your site restarts and 10 people try to view California, you don't want to end up with 10 simultaneous instances of _repo.Get(x.ToKey()) all trying to load the same data.
It might be a good idea to store a boolean value "IsPreloading" in the application state. Set it to true at the start of your preload function and false at the end. If the value is set, make sure you don't load any of your 15 preloaded locations in FindXForX.
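One way to guard against that, sketched below, is a per-key lock (double-checked) around the cache-miss path, so only one caller pays the 40-60 second cost while the others wait for the cached result. It reuses the FindXForX names from the question and assumes System.Collections.Concurrent is available.
private static readonly ConcurrentDictionary<string, object> _keyLocks =
    new ConcurrentDictionary<string, object>();

public ICollection<Location> FindXForX(string x)
{
    var result = _cache.Get(x.ToKey()) as ICollection<Location>;
    if (result != null) return result;

    lock (_keyLocks.GetOrAdd(x.ToKey(), _ => new object()))
    {
        // Re-check inside the lock: another request may have loaded it already.
        result = _cache.Get(x.ToKey()) as ICollection<Location>;
        if (result == null)
        {
            result = _repo.Get(x.ToKey()); // only one caller hits the database
            _cache.Add(x.ToKey(), result);
        }
    }
    return result;
}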
Would suggest taking a look at auto-starting your app, especially if you are load balanced.

Fail to read uncommitted data from the same session with the MySql/.NET connector

I've been banging my head against the wall with this one; maybe someone can shed some light on what may be causing this behavior.
I have an ASP.NET (2.0) app that at some point does:
using (TransactionScope scope = new TransactionScope(...))
{
    // ... do a bunch of queries
    InsertOrder();
    InsertOrderDetails();
    // do some more logic and queries
    ReadOrder();        // reads the newly inserted order OK
    ReadOrderDetails(); // HERE'S THE PROBLEM: I CAN'T READ THE NEWLY INSERTED DETAILS
    // do more inserts....
    scope.Complete();
}
Some more context info:
MySql5.0.27 community
MySql/net connector 5.2.3
Order and OrderDetails are InnoDB with FK constraints
Connection pooling enabled (although I've tried turning pooling off and got the same behavior)
I've tried setting different isolation levels on the transaction just in case, with the same behavior; but this is the same connection, so it shouldn't matter, right?
Anyone has any ideas on what may be causing this?
Any help would be greatly appreciated
Jaime
My guess is that the different functions you're calling are picking up different connections, so they don't see the uncommitted changes from the transaction.
One way of checking this is to get the connection ID on each call and compare them; a sketch follows below.
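Here is a minimal sketch of that check, assuming the MySql.Data connector; where you call it and how you log the result depend on your data-access code.
using System;
using MySql.Data.MySqlClient;

static long GetConnectionId(MySqlConnection connection)
{
    using (var cmd = new MySqlCommand("SELECT CONNECTION_ID()", connection))
    {
        // Compare this value inside InsertOrderDetails() and ReadOrderDetails();
        // different IDs mean different sessions, so uncommitted rows from one
        // won't be visible to the other.
        return Convert.ToInt64(cmd.ExecuteScalar());
    }
}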
I was actually doing a stupid thing with the query preventing it from returning any result... Nothing to do with the transaction or MySql
