Viewing data from Exceptions in ExceptionTelemetry - azure-application-insights

After logging ExceptionTelemetry, is there a way to see the Exception.Data content in the logs? I am using Exception.Data to capture informative data about the environment when the exception occurred.
It appears that ExceptionTelemetry has its own key-value-pair dictionaries for properties and metrics.
If this is not supported, then the plan is to wrap ExceptionTelemetry in code that walks the Exception/InnerException chain and adds any Data entries it finds to those dictionaries. I was hoping not to have to do that myself...

The AI SDK does not walk through the Exception.Data structure itself. The items in Exception.Data can be of any type, including complex structures, and could literally contain anything.
The AI SDK lets you send custom properties (strings of limited size) and custom metrics (doubles). There are also limits on the number of distinctly named properties over the lifetime of the AI application (though these limits keep changing).
So rather than walking through and sending ALL the data in Exception.Data, you might want to grab only the items you know you'll need, so as not to waste custom properties.
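If you do decide to flatten Exception.Data yourself, a minimal sketch of such a wrapper might look like this (the TrackExceptionWithData name and the key-prefixing scheme are invented for illustration, not SDK features):

```csharp
// A minimal sketch, assuming the .NET AI SDK's TelemetryClient and
// ExceptionTelemetry types; the helper name and key-prefixing scheme
// are invented conventions.
using System;
using System.Collections;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class ExceptionTelemetryExtensions
{
    public static void TrackExceptionWithData(this TelemetryClient client, Exception ex)
    {
        var telemetry = new ExceptionTelemetry(ex);
        int depth = 0;
        for (var current = ex; current != null; current = current.InnerException, depth++)
        {
            foreach (DictionaryEntry entry in current.Data)
            {
                // Prefix keys with the depth in the InnerException chain so
                // entries from different exceptions don't collide.
                string key = $"data.{depth}.{entry.Key}";
                telemetry.Properties[key] = entry.Value?.ToString() ?? "<null>";
            }
        }
        client.TrackException(telemetry);
    }
}
```

Note that ToString() on a complex value may not produce anything useful, which is one more reason to cherry-pick the entries you care about rather than sending everything.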

Related

How to handle complex objects in NgRx?

Hello,
I'm working on a project in which the main part of the data has a complex, nested structure, as you can see in the picture above.
The real object is much more complex than that, but the example serves the purpose.
In the database these entities are linked through table relationships, so the first time the website is launched, after login, a list of projects comes back together with some small details of technologies and dataObjects.
I created separate action and effects files, but everything is handled by a single reducer. What I mean is that at the start the list of projects is saved in one state, and then every other action (create a project, technology, or data object; edit; delete) has to operate on that same "projects-state".
For example, besides technologyAPIs there will be another 3-4 technologies, and inside each technology object there will be another list of objects.
The issue is that the reducer file keeps growing as it handles all the actions that operate on specific pieces of the state. It is important that the chain of objects stays together.
My question is: is this a bad approach? Can it be handled in a different way? I know I can create a reducer for each entity (project, technology, data object), but won't I lose the relationship between them, where one belongs to the other?
Thank you so much for your future response
I've only been doing reactive/NgRx for a few months now, but from my understanding that's definitely a bad approach. It should still work, but it may be a hassle to debug and maintain.
NgRx seems to promote "normalising" data, just like the usual relational database concept.
You could break down your project-state into smaller states, with relational keys in them.
Example:
Projects state (not to be confused with project-state):
  id: number
  projectDescription: string
  createdBy: string
  techApis: number[]  // each entry is the id of a TechApi
TechApi state:
  id: number
  otherInfo: any
And then when you need to access the TechApi state for a project, you retrieve it and filter/map it in a selector.
This is a somewhat general example, in case my explanation is not clear.

Using firebase tree structure to represent a "document outline" structure directly

How good/stupid would it be to use Firebase's tree structure to directly represent a user-facing tree structure, like a "document outline" in a word processor?
As opposed to e.g. doing an SQL-join parent-child type of relationship and then building the tree via a projection (which would probably be slow).
I know that there is a limit of 32 levels of nesting ( https://www.firebase.com/docs/web/guide/understanding-data.html ), which should be enough, as I cannot imagine a sane user wanting that many levels of nesting in a textual tree-outline...
Although maybe I need to divide 32 by two, because each node needs sub-nodes for its children and its metadata, right?
I know that once a tree node is accessed via the Firebase API, all of its sub-nodes are fetched as well, which could be a performance problem if the user has a lot of data. But in the end I don't think this would be a problem, since the data would mostly be short, user-entered plaintext.
A performance problem could arise if the user pastes some very long chunks of text copied from somewhere (e.g. tens of kilobytes). But then I could separate those "TLOBs" via a kind of "symlink" in Firebase and fetch them on demand from a different node, right? The same should apply to separating images and other heavy objects, right?
Although in a prototype and in the early stages this should probably be ignored, for the sake of simplicity...
I could probably put in place a generic approach to "symlinking", to overcome the 32-level limitation and the need to fetch all sub-nodes at once, right? Is there some best-practices approach for that (e.g. a syntax for a Firebase node that would symbolise a link to another node)?
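To make the idea concrete, here is a rough sketch of one possible convention, written against Firebase's REST API (GET <path>.json); the "_link" key is an invented marker, not a Firebase feature, and the base URL is illustrative:

```csharp
// A rough sketch of a "symlink" convention over Firebase's REST API.
// A node whose value is { "_link": "/path/to/real/node" } is treated as a
// pointer and resolved with a second fetch. The "_link" key is an invented
// convention; the base URL is illustrative.
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class FirebaseLinks
{
    private static readonly HttpClient Http = new HttpClient();
    private const string BaseUrl = "https://example.firebaseio.com";

    public static async Task<JsonElement> FetchAsync(string path)
    {
        string json = await Http.GetStringAsync($"{BaseUrl}{path}.json");
        JsonElement node = JsonDocument.Parse(json).RootElement;

        // If the node is a symlink stub, follow it instead of returning it.
        if (node.ValueKind == JsonValueKind.Object &&
            node.TryGetProperty("_link", out JsonElement target))
        {
            return await FetchAsync(target.GetString());
        }
        return node;
    }
}
```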
I have extracted the "symlinking" idea to a separate question: Firebase "symlink" to another node.
I could probably partition the topmost nodes into some kind of projects/categories to prevent having to fetch absolutely everything the user has ever created...
Is my reasoning/approach correct?
Is there any consideration that I did not think of, e.g. innate limits on data size or performance, or security rules?
Would I be better served by other technologies like Couchbase/PouchDB?
Further details: this is for a hybrid mobile app, with some emphasis also on web access and offline access. I hope to do most of the logic in JavaScript. The UI part of the question is here: HTML tree for hybrid mobile app.

StatsD/Graphite Naming Conventions for Metrics

I'm beginning the process of instrumenting a web application, and using StatsD to gather as many relevant metrics as possible. For instance, here are a few examples of the high-level metric names I'm currently using:
http.responseTime
http.status.4xx
http.status.5xx
view.renderTime
oauth.begin.facebook
oauth.complete.facebook
oauth.time.facebook
users.active
...and there are many, many more. What I'm grappling with right now is establishing a consistent hierarchy and set of naming conventions for the various metrics, so that the current ones make sense and there are logical buckets within which to add future metrics.
My question is twofold:
What relevant metrics are you gathering that you have found indispensable?
What naming structure are you using to categorize metrics?
This is a question with no definitive answer, but here's how we do it at Datadog (we are a hosted monitoring service, so we tend to obsess over these things).
1. Which metrics are indispensable? It depends on the beholder. But at a high level, for each team: any metric that is as close to their goals as possible (which may not be the easiest to gather).
System metrics (e.g. system load, memory, etc.) are trivial to gather but seldom actionable, because it is too hard to reliably connect them to a probable cause.
On the other hand, the number of completed product tours matters to anyone tasked with making sure new users are happy from the first minute they use the product. StatsD makes this kind of thing trivially easy to collect.
We have also found that the core set of key metrics for any team changes as the product evolves, so there is a continuous editorial process.
This in turn means that anyone in the company needs to be able to pick and choose which metrics matter to them. No permissions asked, no friction to get to the data.
2. Naming structure The highest level of hierarchy is the product line or the process. Our web frontend is internally called dogweb so all the metrics from that component are prefixed with dogweb.. The next level of hierarchy is the sub-component, e.g. dogweb.db., dogweb.http., etc.
The last level of hierarchy is the thing being measured (e.g. renderTime or responseTime).
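To make the naming scheme concrete, here is a minimal sketch of a client that prepends the component prefix and ships metrics over StatsD's plain text-over-UDP protocol; the host, port, and "dogweb" prefix are illustrative, not a prescription:

```csharp
// A minimal sketch of prefix-based metric naming over StatsD's text/UDP
// protocol ("name:value|type"). Host, port, and the "dogweb" prefix are
// illustrative.
using System.Net.Sockets;
using System.Text;

public class StatsdClient
{
    private readonly UdpClient _udp;
    private readonly string _prefix;

    public StatsdClient(string prefix, string host = "localhost", int port = 8125)
    {
        _prefix = prefix;                  // e.g. "dogweb"
        _udp = new UdpClient(host, port);
    }

    public void Timing(string name, int ms) => Send($"{_prefix}.{name}:{ms}|ms");
    public void Increment(string name)      => Send($"{_prefix}.{name}:1|c");

    private void Send(string payload)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(payload);
        _udp.Send(bytes, bytes.Length);
    }
}

// Usage, following the product.subcomponent.metric hierarchy:
//   var statsd = new StatsdClient("dogweb");
//   statsd.Timing("http.renderTime", 42);
//   statsd.Increment("http.status.4xx");
```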
The unresolved issue in Graphite is the encoding of metric metadata in the metric name (and selection using *, e.g. dogweb.http.browser.*.renderTime). It's clever but can get in the way.
We ended up implementing explicit metadata in our data model, but this is not in statsd/graphite so I will leave the details out. If you want to know more, contact me directly.

Biztalk message agnostic orchestration

After moving away from BizTalk back in the BT2006 days, we're looking at bringing it back into the organization. One of the frustrations I had early on was that, when dealing with HL7 and orchestrations, we needed a separate orchestration for each ADT message type, even though the schema for each type is essentially the same and each orchestration did exactly the same thing. Moving forward into the world of BizTalk 2010, has anything improved here? Is there a pattern I can use to handle all ADT types with a single orchestration?
HL7 messaging in BizTalk has remained roughly unchanged since the 2006 release. Because BizTalk defines a schema for each message and event type (e.g. ADT^A01, ADT^A03, ADT^A08) rather than just for each message type (e.g. ADT, BAR, MDM), your maps and orchestrations quickly become a mess.
Here is what I have done in the past to get around this limitation:
Allow messages to come into the orchestration untyped. That is, set the message type to System.Xml.XmlDocument. I found that generally I am only interested in parsing out or updating a few elements, so I would just write a helper library with a few generic LINQ statements to get to the data I needed. This way, I could write a LINQ statement that gets to PID-3 (Patient Identifier List) and use it consistently over any message or event type, because PID stays the same. Likewise, I would use the same technique to update the message. This technique does not work well if there are large structural differences in the fields you are looking to update, or if you need to read or update a large amount of data.
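To illustrate technique #1, here is a rough sketch of such a helper using LINQ to XML; the "PID.3" element-name match is illustrative and must be adjusted to the element names your generated HL7 schemas actually use:

```csharp
// A rough sketch of the technique #1 helper: the orchestration message stays
// untyped (System.Xml.XmlDocument) and a LINQ-to-XML query digs out one field.
// The "PID.3" element-name match is illustrative; check your generated schemas.
using System.Linq;
using System.Xml;
using System.Xml.Linq;

public static class Hl7Helper
{
    public static string GetPatientId(XmlDocument message)
    {
        XDocument doc = XDocument.Parse(message.OuterXml);

        // PID-3 lives under the PID segment in every ADT event type,
        // which is why one helper works across all of them.
        return doc.Descendants()
                  .Where(e => e.Name.LocalName.StartsWith("PID.3"))
                  .Select(e => (string)e)
                  .FirstOrDefault();
    }
}
```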
Create master/canonical HL7 message type schemas. This takes a bit more work, but depending on how many message types you need to process it can really pay off, and it is more consistent with how healthcare organizations think of their HL7 interfaces. To do this, you define a new schema per message type that includes all possible segments for that message. So, instead of having multiple ADT types defined, you roll all the possible variations for A01, A03, A04, etc. under one master schema. This lets you greatly reduce the amount of mapping and parsing logic needed. Unfortunately, this is not the HL7 accelerator's default behavior, and it requires some custom pipelines and orchestration logic to achieve. Basically, you will need to modify some properties to get the accelerator to treat your new master message as valid.
For mostly pass-through interfaces, I would recommend technique #1. Otherwise, if you will be generating or consuming basically any message event in a canonical fashion, technique #2 can pay off in the long run.
As I see it you have two possibilities here.
Treat the message as anonymous. This means your message is "un-typed" (you declare it as a System.Xml.XmlDocument type). Then your orchestration can interrogate the message to decide what type it is.
Create an envelope message whose body can be all of your possible message types (using the xsd choice group selector). Your orchestration then handles the envelope message type. With this approach you can declare the type contained in the body of the envelope by setting a value in the envelope header.
I would prefer the second one; while it is certainly more work (you need to wrap all your inbound messages in an envelope), it allows you to understand what the message is just by looking at the envelope header. This means you can still route by message type if you need to.

How to realize persistence of a complex graph with an Object Database?

I have several graphs. The breadth and depth of each graph can vary and will undergo changes and alterations during runtime. See example graph.
There is a root node to get hold of the whole graph (i.e. tree). A node can have several children, and each child serves a special purpose. Furthermore, a node can access all of its direct children in order to retrieve certain information. On the other hand, a child node may not be aware of its own parent node, nor of its siblings. Nothing spectacular so far.
Storing each graph and updating it with an object database (in this case DB4O) looks pretty straightforward. I could have used a relational database to accomplish data persistence (including database triggers, etc.) but I wanted to realize it with an object database instead.
There is one peculiar thing with my graphs. See another example graph.
To properly perform calculations, some nodes require information from other nodes. These other nodes may be siblings, children/grandchildren, or related in some other way. In this case the specific node knows the other relevant nodes as well (and thus can get the required information directly from them). For the sake of simplicity the first image didn't show all potential connections.
If one node has a change of state (e.g. triggered by an internal timer or by some other node), it will inform other nodes (interested observers; see also the observer pattern) about the change. Each informed node then takes appropriate actions to update its own state (and in turn informs other observers as needed). The root node will not know about every change that occurs, since only the involved nodes know that something has changed. If such a chain of events is triggered by the root node, then of course it's not much of an issue.
The aim is to assure data persistence with an object database. Data in memory should be in sync with data stored in the database. What adds to the complexity is that the graphs don't consist of simple (and dumb) data nodes; lots of functionality is integrated into each node (i.e. events that trigger state changes throughout a graph).
I have several rough ideas on how to cope with this issue, e.g. (1) a stronger separation of data and functionality, (2) a stronger integration of the database, or (3) an arbitrary time interval for updating the data, accepting that it may be out of sync for a period of time. I'm looking for more input and options concerning this key issue (which will definitely leave significant footprints on a concrete implementation).
Edit: there is another aspect I forgot to mention. A graph should not reside in memory all the time. Graphs that are not needed will be present only in the database, and thus put into a state of suspension. This is another issue which needs consideration: while in suspension, the update mechanisms will probably be put to sleep as well, and that is not intended.
In the case of db4o, check out "transparent activation" to automatically load objects on demand as you traverse the graph (this way the whole graph doesn't have to be in memory), and check out "transparent persistence" to allow each node to persist itself after a state change.
http://www.gamlor.info/wordpress/2009/12/db4o-transparent-persistence/
Moreover you can use db4o "callbacks" to trigger custom behavior during db4o operations.
HTH
German
What's the exact question? Here are a few comments:
As @German already mentioned: for complex object graphs you probably want to use transparent persistence.
Also as @German mentioned: callbacks can help you do additional stuff when objects are read, written, etc. on the database.
On the observer pattern: are you on .NET or Java? Usually you don't want to store the observers in the database, since observers tend to be part of your business logic, GUI, etc. On .NET, events are automatically not stored. On Java, make sure you mark the field holding the observer references as transient.
In case you actually want to store observers, for example because they are just other elements in your object graph: on .NET you cannot store delegates/closures, so you need to introduce an interface for calling the observer. On Java, we often use anonymous inner classes as listeners; while db4o can store those, I would NOT recommend it, because an anonymous inner class gets a generated name which can change, and then db4o will not find that class later once you've changed your code.
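For the .NET case, here is a minimal sketch of what that looks like; type names are illustrative, and the point is that the delegate-backed event holding the observers is simply not persisted, while plain fields are:

```csharp
// A minimal sketch of the .NET advice above: plain fields are persisted by
// db4o, while the delegate-backed event holding the observers is not stored.
// Type names are illustrative; the API shown is db4o's embedded .NET API.
using System;
using Db4objects.Db4o;

public class Node
{
    public string Name;                      // persisted
    public Node[] Children;                  // persisted (object references)

    public event EventHandler StateChanged;  // delegates are not persisted

    public void ChangeState()
    {
        // ...mutate persisted fields here...
        StateChanged?.Invoke(this, EventArgs.Empty);
    }
}

public class Program
{
    public static void Main()
    {
        using (IObjectContainer db = Db4oEmbedded.OpenFile("graph.db4o"))
        {
            var root = new Node { Name = "root" };
            db.Store(root);  // stores the whole reachable graph
        }
    }
}
```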
That's it. Ask more detailed questions if you want to know more.
