I'm wondering how Application Insights infers the client's device model, because the Microsoft docs do not state what method is used. Obviously, they must pull it from the client's User-Agent string, but do they use some sophisticated ML algorithm to classify the device model, or do they simply apply some regex-based logic?
I'm asking because I'm not sure how reliable this information is, and I'm considering using it as normalized input for my own multi-class classifier that categorizes user agents into four classes (mobile, desktop, tablet, unknown).
Application Insights uses an OSS component called UA-Parser, which uses regular expressions to parse the User-Agent string and derive the device information used for device.model. The parsing occurs in Application Insights' ingestion service.
The GitHub project appears to be this one:
https://github.com/ua-parser/uap-ref-impl
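For the curious, the heart of UA-Parser is just an ordered list of regex/replacement pairs maintained in uap-core's regexes.yaml. Here is a minimal sketch of that matching logic in Node.js, assuming you have js-yaml installed and a local copy of regexes.yaml; it illustrates the approach, not Application Insights' actual ingestion code:

// Sketch of uap-core style device parsing: first matching regex wins.
const fs = require('fs');
const yaml = require('js-yaml'); // npm install js-yaml

const { device_parsers } = yaml.load(fs.readFileSync('regexes.yaml', 'utf8'));

function parseDeviceFamily(userAgent) {
  for (const p of device_parsers) {
    // some entries carry a regex_flag such as 'i' for case-insensitive matching
    const match = userAgent.match(new RegExp(p.regex, p.regex_flag || ''));
    if (match) {
      // device_replacement may reference capture groups, e.g. "$1"
      const template = p.device_replacement || '$1';
      return template.replace(/\$(\d)/g, (_, i) => match[Number(i)] || '').trim();
    }
  }
  return 'Other'; // uap-core's fallback family
}

console.log(parseDeviceFamily('Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)'));

In other words: no ML, just an ordered regex list, so for a given version of regexes.yaml the results are deterministic.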
I'm using Node.js and ws for my WebSocket servers, and I want to know the best practices for tracking connections and incoming and outgoing messages with Azure Application Insights.
It appears as though this service is really only designed for HTTP requests and responses, so would I be fine if I tracked everything as an event? I'm currently passing the JSON.parse'd connection message values.
What to do here really depends on the semantics of your websocket operations. You will have to track these manually since the Application Insights SDK can't infer the semantics to map to Request/Dependency/Event/Trace the same way it can for HTTP. The method names in the API do indeed make this unclear for non-HTTP, but it becomes clearer if you map the methods to the telemetry schema generated and what those item types actually represent.
If you consider receiving a socket message to semantically begin an "operation" that triggers dependencies in your code, you should use trackRequest to record this information. This will populate the information in the most useful way for you to take advantage of the UI in the Azure Portal (e.g., response-time analysis in the Performance blade or failure-rate analysis in the Failures blade). Because this request isn't HTTP, you'll have to bend your data to fit the schema a bit. An example:
client.trackRequest({name:"WS Event (low cardinality name)", url:"WS Event (high cardinality name)", duration:309, resultCode:200, success:true});
In this example, the name field indicates that items sharing this name are related and should be grouped in the UI. Use the url field for information that more completely describes the operation (like GET parameters would in HTTP). For example, name might be "SendInstantMessage" and url might be "SendInstantMessage/user:Bob".
In the same way, if you consider sending a socket message to be a request for information that has a meaningful impact on how your "operation" behaves, you should use trackDependency to record this information. Much like above, doing this will populate the data in the most useful way to take advantage of the Portal UI (the Application Map, in this case, would then be able to show you the percentage of failed WebSocket calls).
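A sketch mirroring the trackRequest example above, with made-up names; dependencyTypeName is free-form text that the Portal uses for grouping:

client.trackDependency({target:"wss://backend.example.com", name:"NotifyPeer (low cardinality name)", data:"NotifyPeer/user:Bob", duration:231, resultCode:0, success:true, dependencyTypeName:"WebSocket"});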
If you find you're using websockets in a way that doesn't really fit into these, tracking as an event as you are now would be the correct use of the API.
Recently, I was implementing 2FA using TOTP according to RFC 6238. What caught my attention were the default values: a 30 s time step, the Unix epoch as the start time for counting, and especially the widely used parameters (not directly recommended by the RFC): the secret represented in Base32, codes of length 6, and HMAC-SHA1 as the underlying algorithm. My questions:
Is it reasonable to expect widely used implementations to move away from the parameters above? That would imply implementing a way to customize the parameters instead of hard-coding the default values.
Are there any known plans by widely used client implementations (e.g., Authy, 1Password, Google Authenticator) to "upgrade" these parameters?
The answer to the first question depends on your needs. If you have implemented 2FA on your server and are looking for an app to generate codes on the client side, you just need to choose an app that already supports different parameters; that way you can be sure the next app update won't break your auth system.
As for common practice: most auth servers use 6-digit codes with 32-character Base32 seeds and SHA-1 as the hash function, but I've encountered systems with SHA-256 and 52-character seeds.
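To illustrate the first point: if you take the parameters as arguments instead of hard-coding them, a later "upgrade" costs nothing. A minimal Node.js sketch of RFC 6238 using only the built-in crypto module (the Base32 decoder is simplified and the test seed is illustrative):

const crypto = require('crypto');

// Decode an upper-case, unpadded RFC 4648 Base32 string (the common TOTP seed format).
const B32 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
function base32Decode(s) {
  let bits = 0, value = 0; const out = [];
  for (const c of s.replace(/=+$/, '').toUpperCase()) {
    value = (value << 5) | B32.indexOf(c);
    bits += 5;
    if (bits >= 8) { out.push((value >>> (bits - 8)) & 0xff); bits -= 8; }
  }
  return Buffer.from(out);
}

// RFC 6238 with configurable step, digits, and hash; defaults match common practice.
function totp(secretB32, { step = 30, digits = 6, algorithm = 'sha1', now = Date.now() } = {}) {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(now / 1000 / step))); // T = floor((now - T0) / X), T0 = 0
  const hmac = crypto.createHmac(algorithm, base32Decode(secretB32)).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;                     // dynamic truncation (RFC 4226)
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return String(code).padStart(digits, '0');
}

console.log(totp('JBSWY3DPEHPK3PXP'));                             // defaults: 30 s / 6 digits / SHA-1
console.log(totp('JBSWY3DPEHPK3PXP', { digits: 8, algorithm: 'sha256' }));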
I am trying to build an application with the Microsoft Cognitive Speaker Identification Service, but when I test it via its API, some audio samples are not recognized correctly. I would like to know what accuracy level the service offers. Is there any way to improve it?
There are various things that can affect the accuracy of the identification, e.g., noise level, microphone quality, echo, etc.
To improve the performance in your situation, make sure the enrollment audio is recorded under the same conditions as the test audio (e.g., the same microphone), and try to ensure that recording is done in a quiet environment.
It does work across multiple users; I have tried it on different PCs/microphones.
I'd make sure that:
It is in a quiet room/environment
You are sending the audio correctly (it is just byte-array data, no additional encoding).
Also check the MediaTypeHeaderValue/content-type header; all requests seem to use 'application/json' even though we send WAV files.
Take care when mapping your users to the Azure GUIDs, and make sure you are using the correct ones. If you are using the SDK rather than the API for profile creation and enrollment, there's no retrieval of a profile by ID at the moment. I have done a workaround: recreate the profile and update the ID in a database just before enrollment. (The API doesn't need this, though.)
Also make sure you are using the latest API (URLs ending in .../speaker/verification/v2.0/, etc.). Some of the text-independent features in the SDK are v2-only, and verification can fail because v2 stores profiles in 3 separate locations depending on the verification method.
Also check that the profile was created/enrolled using the same verification method you are using to verify. Try with a new profile if unsure.
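For reference, the kind of raw request described above looks roughly like this in Node.js (18+ for the global fetch). The region and route are placeholders to check against the API version you actually use; only the Ocp-Apim-Subscription-Key header is standard across Cognitive Services:

// Hedged sketch: POST raw WAV bytes, no additional encoding.
const fs = require('fs');

async function verify(profileId, wavPath) {
  const res = await fetch(
    // placeholder route; see the .../speaker/verification/v2.0/ endpoints mentioned above
    `https://YOUR_REGION.api.cognitive.microsoft.com/speaker/verification/v2.0/${profileId}`,
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': process.env.SPEECH_KEY,
        'Content-Type': 'application/json', // as observed above, even for WAV payloads
      },
      body: fs.readFileSync(wavPath), // just byte-array data
    }
  );
  console.log(res.status, await res.text());
}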
I'm developing a keyboard, so I'm implementing an InputMethodService. I have a requirement to add other features to this keyboard application, but to separate them into another application in order to leave the keyboard as a standalone keyboard implementation.
So I need to create a keyboard application and another application with all the other features (including, but not limited to: a News Activity, a Messenger, a Lock Screen implementation, and some Widgets).
Those two applications will need to communicate with each other. From my research I found several mechanisms I could use:
A Bound Service
URI implementation
BroadcastReceivers
My question is: what would be the best implementation for my needs? My needs are to pass data from one application to another, as well as to start activities and other components of one app from the other.
After doing some research on this topic, I found that there are several ways to do this:
Using a Bound Service that communicates either through a Messenger object, which passes messages between the local process and the remote bound service, or through AIDL, which defines an interface that the remote bound service hands to the local process so the two can communicate.
The second option would be the good old-fashioned BroadcastReceiver. As always, it is possible to fire an Intent from the local process to the remote process and receive some information there.
The choice between the two comes down to how strong you want the connection between the two processes to be and how often they should communicate. If they only need to perform an operation once in a while, BroadcastReceivers are a perfectly good solution. But if you need a more persistent connection, a Bound Service is the way to go.
We are applying unit tests and integration tests, and we practice test-driven and behaviour-driven development.
We are also monitoring our applications and servers from the outside (with dedicated software in our network).
What is missing is some standard for live monitoring inside the application.
Let me give two examples:
There should be a cron-like process inside the application that regularly checks the structural health of our data structures.
We need to verify that users' regular activity has not endangered the health of the application (there are some actions and inputs that we cannot prevent them from performing).
My question is: what is the correct name for this, so I can research it further in the literature? I did a lot of searching, but I almost always find the xUnit and BDD/integration-test material that I already have.
So what is this called, and what is the standard in professional application development? I would like to know whether there is some standard framework like xUnit for this, or whether xUnit libraries could even be used for it. I could not even find appropriate tags for this question, so if you read this and know better ones, please add them and remove the ones that don't fit.
I need this for applications written in Python, Erlang, or JavaScript; they are mostly server-side applications, web applications, or daemons.
What we are already doing: we created an HTTP gateway inside the applications that reports some internals, and this is monitored by our Nagios infrastructure.
I have no problem rolling my own cron-like self-health-check scheme inside the applications, but I am interested in a professional, standardized way of doing it.
I found this article, which already comes close: Link
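A stripped-down sketch of what I mean, in Node.js with only the built-in http module (the check, the interval, and the port are placeholders; Nagios polls /health and alerts on a non-200 status):

const http = require('http');

let lastCheck = { ok: true, details: {} };

function runStructuralChecks() {
  // e.g. verify invariants in your data structures; stubbed here
  const orphanedRecords = 0; // placeholder for a real integrity query
  lastCheck = { ok: orphanedRecords === 0, details: { orphanedRecords } };
}

setInterval(runStructuralChecks, 60 * 1000); // the cron-like part
runStructuralChecks();

http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(lastCheck.ok ? 200 : 500, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(lastCheck));
  } else {
    res.writeHead(404); res.end();
  }
}).listen(8080);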
It looks like you are asking about approaches to monitoring your application. In general, one can distinguish between active monitoring and passive monitoring.
In active monitoring, you create artificial user load that mimics real user behavior, and you monitor your application based on its responses to this traffic from a non-existent user (active = you actively cause traffic to your application). Imagine you have a web application that returns the weather forecast for a specific city. For active monitoring, you deploy another application that calls your web application with a predefined request ("get weather for Seattle") every N hours. If your application does not respond within the specified time interval, you trigger an alert.
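A sketch of such a probe in Node.js (18+ for the global fetch); the URL, interval, timeout, and alert transport are placeholders:

const INTERVAL_MS = 5 * 60 * 1000;

async function probe() {
  const started = Date.now();
  try {
    const res = await fetch('https://example.com/weather?city=Seattle', {
      signal: AbortSignal.timeout(10_000), // fail if no answer within 10 s
    });
    if (!res.ok) return raiseAlert(`status ${res.status}`);
    console.log(`ok in ${Date.now() - started} ms`);
  } catch (err) {
    raiseAlert(err.message); // timeout or connection error
  }
}

function raiseAlert(reason) {
  // placeholder: page someone, post to chat, raise a Nagios passive check...
  console.error(`ALERT: synthetic "get weather for Seattle" failed: ${reason}`);
}

setInterval(probe, INTERVAL_MS);
probe();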
In passive monitoring, you observe real user behavior over time. You can parse logs to count (un)successful requests/responses, or inject code into your application that updates values in a database whenever a successful or unsuccessful response is returned (passive = you only watch real users' traffic). Then you can create graphs and check whether there is a significant deviation in user traffic. For example, if during the same hour one week ago your application served 1,000 requests and today you get only 200, that may indicate a problem with your software.
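And a sketch of the passive side: count real requests per time bucket and compare against the same bucket one week earlier. The in-memory map and the 5x threshold are illustrative; in practice the counts would come from log parsing or a metrics store:

const HOUR_MS = 60 * 60 * 1000;
const history = new Map(); // hour bucket -> request count

// call this from your request handler (or while replaying logs)
function recordRequest(now = Date.now()) {
  const bucket = Math.floor(now / HOUR_MS);
  history.set(bucket, (history.get(bucket) || 0) + 1);
}

function checkDeviation(now = Date.now()) {
  const bucket = Math.floor(now / HOUR_MS);
  const lastWeek = history.get(bucket - 7 * 24) || 0; // same hour, 7 days ago
  const today = history.get(bucket) || 0;
  if (lastWeek > 0 && today < lastWeek / 5) {
    console.error(`ALERT: ${today} requests this hour vs ${lastWeek} a week ago`);
  }
}

setInterval(checkDeviation, HOUR_MS);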