Application Insights duration metrics - azure-application-insights

I have an Azure Function App. I want to log the duration of some specific part of the code in that Function App. Where would be the right place to store this? I can see that dependencies is the only collection with a duration property, but the documentation states this collection is mostly for SQL Server, Storage, etc. Page views and requests have this too, but they do not seem like the right place. Any pointers on where to add this kind of monitoring?

You can log them as customMetrics.
Use the LogMetric extension method on an ILogger instance to create custom metrics.
public static void Run([BlobTrigger("samples-workitems/{name}", Connection = "AzureWebJobsStorage")]Stream myBlob, string name, ILogger log)
{
    log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");

    // Specify the name of the metric as the first parameter, and the duration as the second.
    log.LogMetric("duration_time", 119);
}
Then, in the Azure portal -> Application Insights -> Logs, you can check the duration in the customMetrics table:
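The duration above is hard-coded; in practice you would measure the section you care about, for example with a Stopwatch (a sketch, with duration_time as a placeholder metric name):

```csharp
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();

// ... the specific part of the code you want to time ...

stopwatch.Stop();

// Log the elapsed milliseconds as a custom metric.
log.LogMetric("duration_time", stopwatch.ElapsedMilliseconds);
```

In Logs, a query such as `customMetrics | where name == "duration_time"` then returns the recorded values.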


How to set a field for every document in a Cosmos db?

What would a Cosmos stored procedure look like that would set the PumperID field for every record to a default value?
We need to do this to repair some data, so the procedure would visit every record that has a PumperID field (not all docs have one) and set it to a default value.
Assuming a one-time data maintenance task, arguably the simplest solution is to create a single-purpose .NET Core console app and use the SDK to query for the items that require changes and perform the updates. I've used this approach to rename properties, for example. This works for any Cosmos database and doesn't require deploying any stored procedures.
Ideally, it is designed to be idempotent so it can be run multiple times, in case several passes are required to catch new data coming in. If the item count is large, one could optionally use the SDK operations to scale up throughput on start and scale back down when finished. For performance, run it close to the endpoint, on an Azure virtual machine or a Function.
For scenarios where you want to iterate through every item in a container and update a property, the best means to accomplish this is to use the Change Feed Processor and run the operation in an Azure function or VM. See Change Feed Processor to learn more and examples to start with.
With the change feed, you will want to start reading from the beginning of the container; to do this, see Reading Change Feed from the beginning.
Then, within your delegate, you will read each item off the change feed, check its value, and call ReplaceItemAsync() to write it back if it needs to be updated.
static async Task HandleChangesAsync(IReadOnlyCollection<MyType> changes, CancellationToken cancellationToken)
{
    Console.WriteLine("Started handling changes...");

    foreach (MyType item in changes)
    {
        if (item.PumperID == null)
        {
            item.PumperID = "some value";
            // Call ReplaceItemAsync() to write the updated item back to the container.
        }
    }

    Console.WriteLine("Finished handling changes.");
}
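For completeness, here is a sketch of wiring that delegate into a Change Feed Processor, assuming the .NET SDK v3; the container names, processor name, and instance name are placeholders:

```csharp
// Hypothetical lease container and monitored container.
Container leaseContainer = database.GetContainer("leases");

ChangeFeedProcessor processor = database.GetContainer("items")
    .GetChangeFeedProcessorBuilder<MyType>("fixPumperId", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    // Read from the beginning of the container instead of only new changes.
    .WithStartTime(DateTime.MinValue.ToUniversalTime())
    .WithLeaseContainer(leaseContainer)
    .Build();

await processor.StartAsync();
// ... wait until the backlog is processed, then:
await processor.StopAsync();
```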

Partial Realms - Why and When are They Created?

I am using Realm and building a Swift mobile app. I am really struggling to understand why and when Partial realms are created.
Here is my scenario:
A user logs in to the app and is brought to the first view controller.
In the first view controller, in viewDidLoad, I execute a query to get the current user, subscribe to the query, and add an observer to let me know when the data is synced:
let currentUserArr = realm.objects(DBUser.self).filter("id == %@", userId)
self.subscription = currentUserArr.subscribe(named: "current user")
self.subscriptionToken = self.subscription.observe(\.state, options: .initial) { state in
    switch state {
    case .creating:
        print("creating")
    case .pending:
        print("pending")
    case .complete:
        print("complete")
        self.artist = currentUserArr[0]
    case .invalidated:
        print("invalidated")
    case .error(let err):
        //seal.reject(err)
        print(err)
    }
}
This makes sense: when I check Realm Cloud, I see a new partial realm created with the path:
/db/__partial/DyeOy3OR4sNsqMi2OmDQQEzUa8F3/~7f11cf52
However, here is where my confusion starts. I log the user out, then log back in, and the code above executes again. I expected Realm to reuse the partial already created, but instead it creates an entirely new partial:
/db/__partial/DyeOy3OR4sNsqMi2OmDQQEzUa8F3/~8bc7bc49
Is this by design or should I somehow be reusing partials rather than having a new one created every time a query is executed (even if it is executed by the same user)?
I have posted on Realm Forums as well:
https://forums.realm.io/t/realm-platform-realm-path-partial-s/2833
I don't believe I was actually logging the current sync user out. Upon further testing, once I did log out and log back in, the existing partial was re-used. This is a non-issue.

Get and Update Workflow data in Filenet

I am having a hard time figuring out how to get the workflow data from FileNet. I tried using the Process Engine and the Content Engine, but I am lost on where to look. Should I use PE or CE? And which particular part of the API?
I can already get the list of object stores from CE, and I can already get the list of search parameters and their data from PE, but I am lost on how to get the workflow step properties and their data, and possibly update them through the Java API.
You need to query for the work items using the PE API. Assuming the work items are in a queue, the query method has this signature:
VWQueueQuery vwQueueQuery = yourQueue.createQuery(String indexName, Object[] firstValues, Object[] lastValues, int queryFlags, String filter, Object[] substitutionVars, int fetchType)
then
while (vwQueueQuery.hasNext()) {
    vwStepElement = (VWStepElement) vwQueueQuery.next();

    // Lock the work item if you want to modify it.
    vwStepElement.doLock(true);

    // Once you have the VWStepElement, there are different ways to get properties.
    // This returns all the properties that are exposed for that queue:
    String[] properties = vwStepElement.getParameterNames();

    // If you want to get a specific property, use:
    Object specificParameter = vwStepElement.getParameterValue("propName");

    // Then, if you want to set a value:
    vwStepElement.setParameterValue(parameterName, parameterValue, compareValue);

    // Finally, if you want to save and dispatch to the next level:
    vwStepElement.setSelectedResponse(response);
    vwStepElement.doSave(true);
    vwStepElement.doDispatch();
}

C# code to increment by 1 an item in Application state

How do you write C# code to increment by 1 an item in Application state named "total" in ASP.NET?
In order to modify any Application variables, you need to lock them before modifying them, so that no inadvertent changes happen between parallel requests.
An example
Application.Lock();
var userCount = Convert.ToInt32(Application["OnlineUserCount"]);
Application["OnlineUserCount"] = ++userCount;
Application.UnLock();
Application.Lock() ensures that only one thread or request has access to the variables while other requests wait in a queue. You modify the values as needed and call Application.UnLock() to release your lock so other requests can work on the Application variables.
Please note that there may be a performance hit if you depend on this heavily.
Note: A page does not need to lock the Application object to edit the application collection. If one page tries to edit the application collection without locking and a second page also tries to edit the collection, no error is sent by IIS and the Application object ends up in an inconsistent state.
Better, use a static variable and Interlocked.Increment, like this:
private static int total = 0;

public static void Increment()
{
    Interlocked.Increment(ref total);
}

PullAsync silently fails

I'm using a .NET backend with Azure Mobile Services on a Windows Phone app. I've added the offline services that Azure Mobile Services provides to the code, using SQLite.
The app makes successful Push calls (I can see the data in my database, and it exists in the local db created by the Azure Mobile offline services).
In PullAsync calls, it makes the right call to the service (the table controller), and the service calculates the results and returns multiple rows from the database. However, the results are lost on the way: my client app receives an empty JSON message (I double-checked this with Fiddler).
IMobileServiceSyncTable<Contact> _contactTable;
/* Table Initialization Code */
var contactQuery = _contactTable.Where(c => c.Owner == Service.UserId);
await _contactTable.PullAsync("c0", contactQuery);
var contacts = await _contactTable.Select(c => c).ToListAsync();
Any suggestion on how I can investigate this issue?
Update
The code above is using incremental sync by passing query ID of "c0". Passing null to PullAsync for the first argument disables incremental sync and makes it return all the rows, which is working as expected.
await _contactTable.PullAsync(null, contactQuery);
But I'm still not able to get incremental sync to return the rows when app is reinstalled.
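One approach worth trying here (a sketch, not verified for the reinstall case): purge the table with the same query ID before pulling. Purging discards the stored incremental-sync token, so the next pull with that ID behaves like a first-time sync:

```csharp
// Discard local data and the incremental-sync token for query ID "c0".
await _contactTable.PurgeAsync("c0", _contactTable.CreateQuery(), false, CancellationToken.None);

// This pull now fetches everything again.
await _contactTable.PullAsync("c0", contactQuery);
```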
I had a similar issue with this; I found that the problem was caused by me making different sync calls against the same table.
In my App I have a list of local users, and I only want to pull down the details for the users I have locally.
So I was issuing this call in a loop:
userTbl.PullAsync("Updating User Table", userTbl.CreateQuery().Where(x => x.Id == grp1.UserId || x.Id == LocalUser.Id));
What I found is that by adding the user ID to the query ID I overcame the issue:
userTbl.PullAsync("User-" + grp1.UserId, userTbl.CreateQuery().Where(x => x.Id == grp1.UserId || x.Id == LocalUser.Id));
As a side note, depending on your id format, the string you supply to the query has to be less than 50 characters in length.
:-)
The .NET backend is set up to not send JSON for fields that have the default value (e.g., zero for integers). There's a bug in the interaction with the offline SDK. As a workaround, you should set up the default value handling in your WebConfig:
config.Formatters.JsonFormatter.SerializerSettings.DefaultValueHandling = Newtonsoft.Json.DefaultValueHandling.Include;
You should also try doing similar queries using the online SDK (use IMobileServiceTable instead of sync table)--that will help narrow down the problem.
This is a very useful tool for debugging the deserialisation: use a custom delegating handler in your MobileServiceClient instance.
public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // The request happens here.
        var response = await base.SendAsync(request, cancellationToken);

        // Read the response content and try to deserialise it here...

        return response;
    }
}

// In your mobile client code:
var client = new MobileServiceClient("https://xxx.azurewebsites.net", new MyHandler());
This helped me to solve my issues. See https://blogs.msdn.microsoft.com/appserviceteam/2016/06/16/adjusting-the-http-call-with-azure-mobile-apps/ for more details.
