Application Insights - Scheduled Analytics

Where did Scheduled Analytics go in the Azure portal? I had production alerts configured with this feature, and it suddenly disappeared as an option. I have no idea whether the alerts are still monitoring the services.

The monitors are still running. You can still access them by turning the feature on with the following URL: https://portal.azure.com/?feature.scheduledanalytics=true.
Note that Scheduled Analytics is in private preview and hence not recommended for live-site monitoring, as preview functionality is not backed by an SLA. We strongly recommend migrating to Log Alerts for Application Insights instead, which is generally available, backed by an SLA, and recommended for production use, with similar functionality (i.e., monitoring via periodic Analytics query execution).
Log Alerts: https://azure.microsoft.com/en-us/updates/log-alerts-for-application-insights-general-availability/
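A Log Alert is created by PUTting a scheduled query rule to the Azure Monitor REST API. The sketch below builds such a request body in Python; the resource IDs, query, and region are placeholders (not taken from the question), and the field names follow the Microsoft.Insights/scheduledQueryRules schema, so verify against the current API version before relying on it.

```python
# Sketch: building the body for a Log Alert (scheduled query rule) PUT.
# All resource identifiers below are placeholders for illustration only.

def build_log_alert_payload(app_insights_id, query, threshold, frequency_min=5):
    """Request body for a Microsoft.Insights/scheduledQueryRules PUT."""
    return {
        "location": "eastus",  # placeholder region
        "properties": {
            "description": "Alert via periodic Analytics query execution",
            "enabled": "true",
            "source": {
                "query": query,
                "dataSourceId": app_insights_id,
                "queryType": "ResultCount",
            },
            "schedule": {
                "frequencyInMinutes": frequency_min,
                "timeWindowInMinutes": frequency_min,
            },
            "action": {
                "odata.type": "Microsoft.WindowsAzure.Management.Monitoring."
                              "Alerts.Models.AlertingAction",
                "severity": "2",
                "trigger": {
                    "thresholdOperator": "GreaterThan",
                    "threshold": threshold,
                },
            },
        },
    }

payload = build_log_alert_payload(
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
    "microsoft.insights/components/<app>",
    "requests | where success == false",
    threshold=5,
)
```

The payload would then be PUT to the scheduledQueryRules endpoint under the target resource group with an ARM bearer token.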

Google Scope Authorizations Loop Endlessly When Previewing or Publishing Apps with Cloud SQL Database

My organization set up Cloud SQL as the default for Google App Maker about one month ago. In the last week, we have been unable to preview or publish apps that use Cloud SQL data sources, including the sample applications, which worked perfectly before. The failure occurs during the authorization process. When previewing or publishing an app, Google App Maker displays a dialog stating "Deploying this app requires authorization". Next it prompts the user for their Google account and then requests approval for the necessary authorizations (e.g., "Manage the data in your Google SQL Service instances"). After approving the authorization, the prompts start over, with the dialog again stating "Deploying this app requires authorization".
Observations:
We have repeated this problem on multiple different computers, networks, and four different user accounts.
In the SQL cloud console, our Cloud SQL instance shows new databases being created for each app along with new database-specific user accounts
All of the databases appear as expected when I log directly into the Cloud SQL database using phpMyAdmin
Other apps which don't use a Cloud SQL datasource work fine, including an app that uses a calculated data source which is hosted in the same Cloud SQL instance
The only errors in the Stackdriver logs for the Cloud SQL database were "INFO"-level communication errors with the database (aborted connection...Got an error reading communication packets)
I'm unable to find Stackdriver logs for the apps because I cannot preview or publish them (either option would provide a link to the Stackdriver logs)
There are now approximately 20 databases in our SQL instance (mostly associated with simple app tests) and we have only used 1 GB of 10 GB of space in our SQL instance
I haven't seen any related problems on the Google Issue Tracker for Google App Maker
I'd appreciate any help or suggestions on what to check in order to resolve this issue.
I posted an issue to Google Issue Tracker and Google corrected the problem. They also provided a workaround if this problem happens again.
Here is the response from the Google development team posted on Google Issue Tracker: https://issuetracker.google.com/issues/145345198
It's great to hear you're up and working again! We are aware of this issue and are working through a longer-term fix. The specific bug appears to be related to some changes made in the Google Cloud session policy control that may have rolled out to your domain recently, interacting with AppMaker in a way that was not expected. We've spent time diagnosing the underlying issue and we believe we know the root cause. I suspect your domain admin did a version of the workaround below.
Without getting too far into the details, the specific bug is that for a Deployer of an AppMaker application, if the Google Cloud Session policy is set with any expiration time, the returned token AppMaker sees is invalid, triggering a loop in AppMaker trying to generate a valid security token. Historically, these session tokens never expired but recently there was beta feature launch that allowed domain admins to set them to expire. We strongly suspect your domain recently set this expiration policy explicitly and that's what is causing the bug.
The good news is that these policies are overridable per Organizational Unit and we have tested that OUs which have the original classic Never Expire setting do, in fact, allow AppMaker to work.
My suspicion is that your domain admin has reverted recent, local changes to your organizational policy under the admin.google.com console, specifically under Security > Google Cloud session control (Beta).
If this happens again, here is the workaround we would recommend. Note you don't need to do this if you're currently up and working. You will need the help of someone with admin.google.com powers, specifically User and Organizational Unit powers, at your organization. It is a slight increase in security risk, but it restores some classic behavior that was standard until recently.
The summary of the workaround is to override the Google Cloud session control expiration setting such that individuals who need access to AppMaker deployments can have it. To mitigate systemic security risk, this is best done by creating a limited purpose Organizational Unit with just that setting different than the parent OU settings.
The workaround is to:
Contact someone in your domain with Admin powers for your Google for Business license.
Have your admin proceed to https://admin.google.com. The actions below need to be performed by a domain admin.
Under the Users section, identify the specific user account that needs the ability to deploy AppMaker Apps.
Identify the Organizational Unit of that AppMaker dev user and make a note of it.
Under the Organization Units settings, locate the Organization Unit you identified above.
Create a new Organization Unit underneath that user's current Organizational Unit with a descriptive name identifying it as special w.r.t. AppMaker. For example, for Developers, make something like DevelopersWhoAreAlsoAppMakerDevs.
Back under the Users tab, locate the user from step 3. Move this user into the new Organizational Unit you've just created. This change can take a while to propagate.
-Interlude- At this point, you've made a new Organizational Unit for just that individual and added them to it. You can certainly add multiple people to that OU, especially if they're already in the same parent OU. Use your discretion as to what amount of Organizational rework you wish to pursue. You may not be using OUs at all or you may decide to just turn off this control for the whole domain. It's up to you.
Under admin.google.com's Security settings, locate the Google Cloud session control (beta) settings.
Under this panel, from the dropdown menu on the left, locate the Organization Unit you just created.
Be sure to select ONLY the OU you intend to change.
Change the "Google Cloud Console and Google Cloud SDK session control" from expiring to "Session Never Expires".
Save your changes.
The account you selected in step 3 should now be able to deploy AppMaker apps.
It appears this OU change is only necessary for the deployer of an AppMaker app, not for every individual user. Note also that if you have multiple AppMaker developers who all have different current OU settings, you may need to create multiple daughter OUs to avoid a sudden radical shift in OU settings for any individual account.

Audit logging CosmosDB

Wanting to validate my ARM template was deployed ok and to get an understanding of the telemetry options...
Under what circumstances do the following get logged to Log Analytics?
DataPlaneRequests
MongoRequests
QueryRuntimeStatistics
Metrics
From what I can tell after arduously connecting in different ways over the last few days:
DataPlaneRequests are logged for:
SQL API calls
Table API calls even when the account was setup for SQL API
Graph API calls against an account setup for Graph API
Table API calls against an account setup for Table API
MongoRequests are logged for:
Mongo requests even when the account was setup for SQL API
However, I haven't been able to see anything for QueryRuntimeStatistics (even when turning on PopulateQueryMetrics), nor have I seen any AzureMetrics appear.
Thanks Alex for spending time and trying out different options of logging for Azure Cosmos DB.
There are primarily two types of monitoring paths for Azure Cosmos DB.
Metrics: These are low-latency (<5 min), aggregated metrics which are exposed on the Azure Monitor API for consumption. These metrics are primarily used to diagnose the app for any live-site issues.
Logs: These are raw request logs arriving with 2+ hours of latency, used by customers primarily for audit scenarios, i.e., to understand who accessed the data.
Depending on your need you can choose either of the approaches.
DataPlaneRequests by default shows all the requests across all the APIs, and MongoRequests only shows Mongo-specific calls. Please note Mongo requests would also be seen in DataPlaneRequests.
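The routing rule above can be summarized in a small conceptual sketch: every request lands in DataPlaneRequests, and Mongo calls additionally land in MongoRequests. The table names are the real Log Analytics categories; the function itself is purely illustrative and not part of any SDK.

```python
# Conceptual sketch of Cosmos DB diagnostic-log routing (illustrative only).

def log_tables_for(api_call):
    """Return the Log Analytics tables a Cosmos DB request appears in."""
    tables = {"DataPlaneRequests"}   # requests for all APIs land here
    if api_call == "mongo":
        tables.add("MongoRequests")  # Mongo calls are additionally logged here
    return tables
```

This matches the observation in the question that Mongo requests show up even when the account was set up for the SQL API.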
Metrics would not be seen in Log Analytics due to a known issue which our partner team is fixing.
Let me know if you have any further questions here.

Creating Stackdriver alerting policies via Monitoring v3 API

Using the Stackdriver v3 monitoring API, I can create monitoring groups, uptime checks (including uptime checks based on the created monitoring groups), alerting policies, and uptime check alerting policies. The policies trigger as expected and I receive the configured notifications (notifications are configured manually via the console UI).
I am using a combination of the API Explorer for REST methods and scripted, gcloud commands. The alerting policies are created using JSON files.
All is well, except that when viewing the Uptime Checks Overview UI > Policies (the little blue bell) in the Stackdriver console, every uptime check whose alerting policy was created via the API shows a grey bell, i.e. no association with an alerting policy (although all policies function as expected).
I’ve been at this for a while and I am out of ideas as to why. Has anyone observed this or possibly have any idea where the problem would be?
I tried to reproduce the issue again; however, I couldn't. The bell sign in the Uptime Check UI seems to work as expected now, so it appears the issue is fixed. You can follow its public tracker here.
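For reference, an uptime-check alerting policy of the kind described above is created with projects.alertPolicies.create in the Monitoring v3 API. The sketch below builds such a policy body in Python; the check id and notification channel are placeholders, and the aggregation values follow the shape the console typically generates for uptime alerts, so treat them as an assumption to verify against your project.

```python
# Sketch: request body for Monitoring v3 projects.alertPolicies.create,
# alerting on a failing uptime check. All identifiers are placeholders.

def build_uptime_alert_policy(check_id, notification_channel):
    return {
        "displayName": f"Uptime alert for {check_id}",
        "combiner": "OR",
        "conditions": [{
            "displayName": f"Uptime check failing: {check_id}",
            "conditionThreshold": {
                "filter": (
                    'metric.type="monitoring.googleapis.com/'
                    'uptime_check/check_passed" '
                    f'AND metric.label.check_id="{check_id}" '
                    'AND resource.type="uptime_url"'
                ),
                "comparison": "COMPARISON_GT",
                "thresholdValue": 1,
                "duration": "300s",
                "aggregations": [{
                    # values the console tends to emit for uptime alerts
                    "alignmentPeriod": "1200s",
                    "perSeriesAligner": "ALIGN_NEXT_OLDER",
                    "crossSeriesReducer": "REDUCE_COUNT_FALSE",
                    "groupByFields": ["resource.label.*"],
                }],
            },
        }],
        "notificationChannels": [notification_channel],
    }

policy = build_uptime_alert_policy(
    "my-check", "projects/my-project/notificationChannels/123"
)
```

Comparing a working console-created policy against an API-created one field by field is also a good way to debug the grey-bell association issue.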

Is Azure Cloud Service Worker role the only Azure hosting option for running an EventHub EventProcessor?

I'm currently fighting my way through Event Hubs and EventProcessorHost. All guidance I found so far suggests running an EventProcessor in an Azure Cloud Service worker role. Since those are very slow to deploy and update I was wondering if there is any Azure service that lets me run an EventProcessor in a more agile environment?
So far my rough architecture looks like this
Device > IoT Hub > Stream Analytics Job > Event Hub > [MyEventProcessor] > SignalR > Clients...
Or maybe there is another way of getting from Stream Analytics to firing SignalR messages?
Any recommendations are highly appreciated.
Thanks, Philipp
You may use Azure Web App service with the SignalR enabled and merge your pipeline "steps" [MyEventProcessor] and SignalR into one step.
I have done that a few times: I started from the simple SignalR chat demo and added the Event Hub receiver functionality to the SignalR processing. That article is close to what I mean in terms of approach.
You may take a look at Azure WebJobs as well. Basically, a WebJob can work as a background service running your logic, and the WebJobs SDK has support for Event Hubs.
You can run an EventProcessorHost in any Azure offering that will run arbitrary C# code and keep running. The options for where you should run it end up depending on how much you want to spend and what you need. Azure Container Service may be the new fancy deployment system, but its minimum cost may not be suitable for you. I'm running my binaries that read data from Event Hubs on normal Azure Virtual Machines, with our deployment system in charge of managing them.
If your front-end processes using SignalR to talk to clients stay around for a while, you could just make each one of them its own logical consumer (consumer group) and have them consume the entire stream. Even if they don't stay around (i.e., you're using an Azure hosting option that turns off the process when idle), you could write your receiver to just start at the end of the stream (as opposed to reprocessing older data), if that's what your scenario requires.
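The two receiver strategies described above ("consume the entire stream" versus "start at the end of the stream") can be sketched over an in-memory event list. Real code would use EventProcessorHost or an Event Hubs SDK; the class and names here are illustrative only.

```python
# Conceptual sketch: each logical consumer (one per consumer group) keeps
# its own position in the stream, independent of other consumers.

class StreamConsumer:
    def __init__(self, stream, start_at_end=False):
        self.stream = stream
        # "start at end": ignore history, receive only events after startup
        self.offset = len(stream) if start_at_end else 0

    def receive(self):
        """Return events not yet seen by this consumer and advance."""
        events = self.stream[self.offset:]
        self.offset = len(self.stream)
        return events

stream = ["e1", "e2"]                      # events already in the hub
tail = StreamConsumer(stream, start_at_end=True)
replay = StreamConsumer(stream)            # reprocesses from the beginning
stream.append("e3")                        # a new event arrives
```

After the new event arrives, the tail consumer sees only "e3" while the replaying consumer sees all three events, which is the trade-off to pick based on whether your SignalR front end can afford to miss history.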

Google Calendar API - Appropriate API Key Solution to Exceeded Quotas

I have a custom Windows service developed in C#.NET that synchronizes users' Google calendars with an internal calendar.
Per the Google Calendar API documentation, I'm using the below code. I believe this is referred to as the ClientLogin method which may or may not be advised (I've found conflicting information in the Google documentation).
CalendarService service = new CalendarService("Your app name");
service.setUserCredentials("username", "password");
This worked fine in testing. Now that things have moved to production, I'm receiving errors such as "The user has exceeded their quota, and cannot currently perform this operation" and "User has modified too many events today. Please try again tomorrow." This began more than a day ago and has remained as such.
I've researched this considerably and am still confused on a few points. Any help would be greatly appreciated.
What is the daily quota per user?
Are the (really low?) quotas there because an API key isn't being used by my application?
If I were to use an API key, which approach would I use for a Windows service in which I have the usernames and passwords for the Google users? - Simple API, OAuth2, Service Account, etc.
FYI: I am using the API .NET library provided by Google. If I should be using a particular authentication approach, I would appreciate a sample illustrating the implementation using the .NET library provided via Google.
First of all, you're definitely not using the latest version of the library. You can download it from NuGet. You should download the following two packages:
https://www.nuget.org/packages/Google.Apis.Calendar.v3/
https://www.nuget.org/packages/Google.Apis.Authentication/ (be aware that in the next release we are going to improve the OAuth2 flows significantly, and support WP and Windows 8 applications)
Regarding your questions:
1-2) Calendar API supports 100,000 requests/day. You can find that information in the Google API Console in the services tab.
3) Definitely OAuth2. Read more here and here.
You can find code samples with the current implementation of OAuth2 in our samples repository (https://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples)
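Whatever client library you use, the OAuth2 recommendation above boils down to the same HTTP exchange under the hood: after the one-time consent flow, the service trades a stored refresh token for short-lived access tokens instead of sending usernames and passwords. A minimal sketch of that refresh request, with placeholder credentials and using Google's current token endpoint:

```python
# Sketch of the OAuth2 refresh-token exchange the client libraries perform.
# Client id, secret, and refresh token below are placeholders.
import urllib.parse
import urllib.request

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id, client_secret, refresh_token):
    """Build the POST body that trades a refresh token for an access token."""
    params = {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }
    return urllib.parse.urlencode(params).encode()

body = build_refresh_request("my-client-id", "my-secret", "my-refresh-token")
# urllib.request.urlopen(TOKEN_ENDPOINT, data=body)
# -> JSON response containing a short-lived access_token
```

For a Windows service with many users, a service account with domain-wide delegation avoids storing per-user passwords entirely, which is what the ClientLogin approach in the question was doing.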
