I'm using Application Insights in multi-component mode; all my telemetry goes to one AI resource. I have two web apps, each of which runs an in-browser client.
In the application map, I correctly get a rectangle for each of the different parts of the system - except for the clients. All client activity shows up as "Client: production" ("production" is the name of the AI resource).
There are actually two such "Client: production" rectangles, each one visible only when selecting the respective server app. That sounds about right, but: if I click on related metrics, I get this filter:
That obviously contains metrics for all clients, not just the selected one. Is there no way to properly separate the two?
There is a current limitation of the multi-component application map: neither Availability nor Client telemetry honors multi-component mode. We're working on fixing it.
I have two applications running independently of one another. At present, ApplicationSend is web-based and ApplicationReceive is desktop-based, but tomorrow it might be the opposite. There are no multiple UIs.
ApplicationSend keeps logging info into a database or text file. When ApplicationSend logs something, it needs to show up in ApplicationReceive at the same time, without any delay.
ApplicationReceive uses a REST API to display the data.
I have two questions:
1. How can this be achieved in C#? Please note the applications are completely unaware of each other; also, one application can be a Windows app and the other a web app.
2. When the REST API has displayed 10 logs, should the next call query for all the logs or only the new ones? Ideally it should be only the new logs, but how would one get just those? (A sketch of one approach follows.)
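For illustration, a rough sketch of the "only the new logs" idea (shown in Java; the same cursor pattern works in C#, and the /api/logs endpoint and afterId parameter are hypothetical):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // ApplicationReceive remembers the highest log id it has already displayed and
    // asks the REST API only for entries created after that id.
    public class LogPoller {
        private final HttpClient client = HttpClient.newHttpClient();
        private long lastSeenId = 0; // persisted between polls (file, database, ...)

        public String fetchNewLogs() throws Exception {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/api/logs?afterId=" + lastSeenId))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // After parsing the response, advance lastSeenId to the highest id returned,
            // so the next call fetches only logs written since this one.
            return response.body();
        }
    }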
I'm developing a keyboard, so I'm implementing an InputMethodService. I have a requirement to add other features to this keyboard application, but to ship them as a separate application so that the keyboard stays a standalone keyboard implementation.
So I need to create a keyboard application and another application with all the other features (other features include, but are not limited to: a News Activity, a Messenger, a Lock Screen implementation and some Widgets).
Those two applications will need to communicate with each other. From my research I found that there are several mechanisms I could use:
A bound service
URI implementation
BroadcastReceivers
My question is: what would be the best implementation for my needs? My needs are to pass data from one application to the other, as well as to start activities and other components of one app from the other.
After doing some research on this topic, I found that there are several ways to do this:
Using a bound service, which either uses a Messenger object to pass messages between the local process and the remote bound service, or uses AIDL to create an interface that is passed from the remote bound service to the local process so that they can communicate.
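A minimal sketch of the Messenger variant, with hypothetical package, class and message names (in practice the two apps would share these constants through a small common library):

    import android.app.Service;
    import android.content.ComponentName;
    import android.content.Context;
    import android.content.Intent;
    import android.content.ServiceConnection;
    import android.os.Handler;
    import android.os.IBinder;
    import android.os.Looper;
    import android.os.Message;
    import android.os.Messenger;
    import android.os.RemoteException;

    // Exported service in the keyboard app; the companion app binds to it and sends Messages.
    public class KeyboardControlService extends Service {
        public static final int MSG_SET_THEME = 1; // hypothetical message code

        private final Messenger messenger = new Messenger(new Handler(Looper.getMainLooper()) {
            @Override public void handleMessage(Message msg) {
                if (msg.what == MSG_SET_THEME) {
                    // react to the request coming from the other application
                } else {
                    super.handleMessage(msg);
                }
            }
        });

        @Override public IBinder onBind(Intent intent) {
            return messenger.getBinder(); // hands the client a Messenger-backed binder
        }
    }

    // Client side, in the companion app (for example called from an Activity):
    class KeyboardClient {
        void bindAndSend(Context context) {
            Intent intent = new Intent();
            intent.setComponent(new ComponentName(
                    "com.example.keyboard", "com.example.keyboard.KeyboardControlService"));
            context.bindService(intent, new ServiceConnection() {
                @Override public void onServiceConnected(ComponentName name, IBinder binder) {
                    try {
                        new Messenger(binder).send(
                                Message.obtain(null, KeyboardControlService.MSG_SET_THEME));
                    } catch (RemoteException e) {
                        // the remote process died; rebind or report the error
                    }
                }
                @Override public void onServiceDisconnected(ComponentName name) { }
            }, Context.BIND_AUTO_CREATE);
        }
    }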
The second option would be the good old-fashioned BroadcastReceiver. As always, it is possible to fire an Intent from the local process to the remote process and receive some information there.
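And a sketch of the BroadcastReceiver route, again with made-up action and package names; setting the target package makes the Intent explicit enough to reach the other app:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    // Sender side (for example in the keyboard app): address the companion app directly.
    class NewsUpdateSender {
        static void send(Context context) {
            Intent intent = new Intent("com.example.companion.ACTION_NEWS_UPDATE"); // hypothetical action
            intent.setPackage("com.example.companion");                             // hypothetical package
            intent.putExtra("headline", "Something happened");
            context.sendBroadcast(intent);
        }
    }

    // Receiver side (in the companion app), declared in its manifest with a matching intent-filter.
    public class NewsUpdateReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            String headline = intent.getStringExtra("headline");
            // update the News Activity, refresh a widget, start another component, etc.
        }
    }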
The choice between the two comes down to how strong you want the connection between the two processes to be and how often they should communicate. If they only need to perform an operation once in a while, BroadcastReceivers are a perfectly good solution. But if you need a more persistent connection, the bound service is the way to go.
We apply unit tests and integration tests, and we practice test-driven and behaviour-driven development.
We are also monitoring our applications and servers from the outside (with dedicated software in our network).
What is missing is some standard for live monitoring inside the application.
To give an example:
There should be a cron-like process inside the application that regularly checks some structural health properties of our data structures.
We need to monitor that users' regular actions do not endanger the health of the applications (there are some actions and inputs that we cannot prevent them from doing).
My question is: what is the correct name for this, so I can research it further in the literature? I did a lot of searching, but I almost always find the xUnit and BDD / integration-test material that I already have.
So what is this called, and what is the standard in professional application development? I would like to know if there is some standard structure like xUnit, or whether xUnit libraries could even be used for it. I could not even find appropriate tags for this question, so if you read this and know some better tags, please add them and remove the ones that don't fit.
I need this for applications written in Python, Erlang or JavaScript; these are mostly server-side applications, web applications or daemons.
What we are already doing: we created an HTTP gateway inside the applications that reports some data, and this is monitored by our Nagios infrastructure.
I have no problem rolling my own cron-like self-health scheme inside the applications, but I am interested in whether there is a professional, standardized way of doing it.
I found this article, it already comes close: Link
It looks like you are asking about approaches to monitoring your application. In general, one can distinguish between active monitoring and passive monitoring.
In active monitoring, you create some artificial user load that mimics real user behavior, and monitor your application based on the responses to these artificial requests from a non-existing user (active = you actively cause traffic to your application). Imagine that you have a web application which lets users get the weather forecast for a specific city. For active monitoring, you deploy another application that calls your web application with some predefined request ("get weather for Seattle") every N hours. If your application does not respond within the specified time interval, you trigger an alert based on that.
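For illustration, a tiny active probe for the weather example, meant to be run from cron or any scheduler (being external, it can be written in any language; the URL and timeouts here are made up, and in a real setup the alert would feed into something like Nagios):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // Issues one predefined request and exits non-zero if the application is unhealthy.
    public class WeatherProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/weather?city=Seattle")) // hypothetical endpoint
                    .timeout(Duration.ofSeconds(10))
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                System.err.println("ALERT: weather endpoint returned " + response.statusCode());
                System.exit(1); // non-zero exit so the scheduler or monitoring notices
            }
        }
    }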
In passive monitoring, you observe real user behavior over time. You can use log parsing to get the number of (un)successful requests/responses, or inject some code into your application that updates some values in a database whenever a successful or unsuccessful response is returned (passive = you only check other users' traffic). Then you can create graphs and check whether there is a significant deviation in user traffic. For example, if at the same time of day one week ago your application served 1000 requests and today you get only 200 requests, it may indicate a problem with your software.
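And a correspondingly minimal passive check that compares today's request volume against the same period last week (the log paths and the 20% threshold are assumptions; in practice the counts would come from your log pipeline or a database):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    // Counts requests in two access-log slices and alerts on a sharp drop in traffic.
    public class TrafficCheck {
        public static void main(String[] args) throws Exception {
            long today = countLines(Path.of("/var/log/app/access-today.log"));
            long lastWeek = countLines(Path.of("/var/log/app/access-lastweek.log"));
            if (lastWeek > 0 && today < lastWeek * 0.2) {
                System.err.println("ALERT: requests dropped from " + lastWeek + " to " + today);
                System.exit(1);
            }
        }

        private static long countLines(Path path) throws Exception {
            try (Stream<String> lines = Files.lines(path)) {
                return lines.count();
            }
        }
    }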
Say, for example, you are caching data within your ASP.NET web app that isn't updated often. You have another process running outside of the app which occasionally updates this data; when it does, you would like the cached data to be cleared immediately so that the next request picks up the new data straight away.
The caching service is running in the context of your web app and not externally - what is a good method of calling into the web app to get it to update the cache?
You could, of course, just hack together a page or web service called ClearTheCache that does it. This can then be called by your other process. Of course you don't want this process to be externally usable or visible on your web app, so perhaps you could check that incoming requests to this page are coming from localhost, and if not, throw a 404. Is this acceptable? Could it be spoofed at all (for instance if you used HttpApplication.Request.Url.Host)?
I can think of many different ways to go about this, mainly revolving around creating a page or web service and limiting requests to it somehow, but I'm not sure any are particularly elegant. Neither do I like the idea of the web app routinely polling out to another service to check if it needs to execute something, I'd really like a PUSH solution.
Note: The caching scenario is just an example, I could use out-of-process caching here if needed. The question is really concentrating on invoking code, for any given reason, within a web app externally but in a controlled context.
Don't worry about limiting to localhost; you may want to push from a different server in the future. Instead, share a key (asymmetric or symmetric, it doesn't really matter) between the two, have the PUSH service encrypt a block of data (control data, for example) and have the receiver decrypt it. If the block decrypts correctly and the data is readable, you can safely assume that only the service that was supposed to call you has done so, and you can perform the required actions. Not the neatest solution, but it allows you to scale beyond a single server.
EDIT
Having said that, an asymmetric key would be better: have the PUSH service hold the private part and the website the public part.
EDIT 2
Have the PUSH service put the date/time at which it generated the ciphertext into the data block; then the client can be sure that a replay attack hasn't taken place by ensuring the date/time is within an acceptable time period (say a minute).
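To make that concrete, here is a rough sketch of the verification side. It is in Java purely for illustration (the original context is ASP.NET, but the steps are the same); the symmetric AES key, the payload format and the one-minute window are all assumptions:

    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.time.Duration;
    import java.time.Instant;
    import java.util.Arrays;
    import java.util.Base64;

    // Receiving side: decrypt the pushed block with the shared key and reject it when the
    // embedded timestamp is older than about a minute (the replay check described above).
    public class PushVerifier {
        private final SecretKeySpec key; // AES key shared with the PUSH service

        public PushVerifier(byte[] rawKey) {
            this.key = new SecretKeySpec(rawKey, "AES");
        }

        /** Returns the control data (e.g. "clear-cache"), or throws if the block is bad or stale. */
        public String verify(String base64Block) throws Exception {
            byte[] block = Base64.getDecoder().decode(base64Block);
            byte[] iv = Arrays.copyOfRange(block, 0, 12);            // 12-byte GCM nonce, sent in clear
            byte[] cipherText = Arrays.copyOfRange(block, 12, block.length);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            String plain = new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8);

            // Assumed payload format: "<ISO-8601 timestamp>|<control data>"
            String[] parts = plain.split("\\|", 2);
            Instant sent = Instant.parse(parts[0]);
            if (Duration.between(sent, Instant.now()).abs().compareTo(Duration.ofMinutes(1)) > 0) {
                throw new SecurityException("Stale or replayed push block");
            }
            return parts[1];
        }
    }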
Consider an external caching mechanism like the Enterprise Library's caching block, which would be available to both the web app and the service, or a file to cache data to.
HTH.
I’ve been asked if we can optionally “single-instance” our web portal. See this post on Hanselman's blog for the same idea in a WinForms app.
Suppose we have 2 shortcuts on the same client machine:
http://MyServer/MyWebPortal/Default.aspx?user=username&document=Foo
http://MyServer/MyWebPortal/Default.aspx?user=username&document=Bar
Clicking on the first shortcut would launch our web portal, log in, and display the document “Foo”. Clicking on the second shortcut should display the document “Bar” in the running instance of the web portal.
My current approach is this: in the Page Load, the first instance creates a per-client Application variable. The second instance looks for the Application variable to see if the portal is already running on the client. If it is, the second URL is recorded in another Application variable and the second instance is forcibly exited. I’ve tried creating an ASP.NET AJAX Timer to poll the Application variable for a document to display. This sort of works. In order to respond quickly to the second request I’ve set the Timer interval to 2 seconds, which makes the portal irritating to use because of the frequent postbacks.
Using my approach, is there a way for the second instance to notify the first instance to check the application variable without polling? Is there a better overall approach to this problem?
Thanks in advance
There is no way on the server side to control which browser instance your page opens up on the client. You can't force all requests to open in the same browser window.
Also, an Application scope variable is shared by all users of your application. At least make this a Session-scope variable - otherwise you would only be allowing one user to access your portal at a time!
Honestly, this sounds like a silly request from someone who (a) probably doesn't understand how these types of things work and (b) is trying to do an end-around for users who aren't that bright and actually see a problem with having more than one instance of your portal open.