Creating Stackdriver alerting policies via Monitoring v3 API

Using the Stackdriver Monitoring v3 API, I can create monitoring groups, uptime checks (including uptime checks based on the created monitoring groups), alerting policies, and uptime-check alerting policies. The policies trigger as expected and I receive the configured notifications (notifications are configured manually via the console UI).
I am using a combination of the API Explorer for REST methods and scripted gcloud commands. The alerting policies are created from JSON files.
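For illustration, a trimmed-down sketch of how a policy gets created (the project ID, display names, filter, and threshold below are placeholders; the real policies come from JSON files):
from googleapiclient import discovery

# Assumes application default credentials are available.
service = discovery.build("monitoring", "v3")

policy = {
    "displayName": "uptime-check-policy",  # placeholder
    "combiner": "OR",
    "conditions": [{
        "displayName": "uptime check failed",
        "conditionThreshold": {
            "filter": ('metric.type="monitoring.googleapis.com/uptime_check/check_passed" '
                       'AND resource.type="uptime_url"'),
            "comparison": "COMPARISON_GT",
            "thresholdValue": 1,
            "duration": "300s",
        },
    }],
}

service.projects().alertPolicies().create(
    name="projects/my-project-id", body=policy).execute()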
All is well, except that when viewing the Uptime Checks Overview UI > Policies (the little blue bell) in the Stackdriver console, every uptime check created via the API shows a grey bell, i.e. no associated alerting policy (although all policies function as expected).
I've been at this for a while and am out of ideas as to why. Has anyone observed this, or does anyone have an idea of where the problem might be?

I tried to reproduce the issue again but couldn't. The bell icon in the Uptime Checks UI now seems to work as expected, so it appears the issue has been fixed. You can follow its public tracker here.

Application Insights shows GET calls while my code does not have any such calls being made

We have integrated Azure Application Insights with our bot, which is built on the Azure Bot Framework using Node.js and TypeScript. Everything looks fine and we can see telemetry data flowing in.
In the Failures section, we repeatedly see the operation name "GET /api/messages": one failed call (405) and one successful call (200).
But our code performs no GET operations on "/api/messages"; we only have POST operations.
We are unable to understand why the telemetry shows a GET operation at all, let alone one failed and one successful.
Any help is appreciated.
The operation_SyntheticSource field of request telemetry is often used by Microsoft/Azure to indicate traffic generated by infrastructure or bots, such as health probes, keep-alive traffic, and spider bots.
There are options to filter telemetry, so it is possible to drop telemetry caused by synthetic traffic; see the docs.
Telemetry processors can be configured using dependency injection (DI).
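For illustration, here is the same filtering idea sketched with the OpenCensus Azure exporter for Python (the Node.js applicationinsights package offers an equivalent addTelemetryProcessor hook; the connection string below is a placeholder):
from opencensus.ext.azure.trace_exporter import AzureExporter

exporter = AzureExporter(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000")

def drop_synthetic(envelope):
    # Envelopes tagged with ai.operation.syntheticSource come from
    # infrastructure probes, keep-alive pings, bots, etc.
    if envelope.tags.get("ai.operation.syntheticSource"):
        return False  # returning False filters the item out
    return True

exporter.add_telemetry_processor(drop_synthetic)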

Googlesheets quota limit issues - possible failure to use API key

We are currently using Google Sheets for a research project on crowd forecasts for Covid-19 case and death numbers.
Google Sheets is used for convenience, but we often run into quota limit issues, even though our number of users should be well below what Google allows.
I attempted to create a somewhat reproducible example by setting up a new Google account and creating a sheet to read from.
The first thing I tried (without making any changes to the Google account) is this:
library(googledrive)
library(googlesheets4)
# Google Sheets authentication -----------------------------------------------
options(gargle_oauth_cache = ".secrets")
drive_auth(cache = ".secrets", email = "iamatestotest@gmail.com")
gs4_auth(token = drive_token())
sheet_id <- "1Z2O5Mce_haceWfduLenJQP-hddXF9biY_4Ydob_psyQ"
# Read the sheet repeatedly to see how many requests succeed before
# hitting a quota error.
n_tries <- 50
for (i in 1:n_tries) {
  data <- read_sheet(ss = sheet_id)
  Sys.sleep(0.5)
  print(i)
}
From what I understand, I should be able to make around 300 read requests per minute, but I'm usually not able to get the loop past 30-34 iterations.
As I wasn't sure the 300 requests were readily available, I went to https://console.cloud.google.com, created a new test project (not sure why that is needed), explicitly enabled the Google Sheets API, and created some credentials: an API key as well as an OAuth 2.0 client ID (although I am admittedly somewhat lost as to what the latter does and how to use it).
I next tried to log in with my API key by running
drive_deauth()
drive_auth_configure(api_key = "thisismyapikey")
gs4_auth(token = drive_api_key())
but that also didn't get me beyond roughly 33 iterations. I also had a look at the Google Cloud console but couldn't see any traffic there either, so I'm not sure my API key was actually used.
I assume this is due to my inability to use the API in the intended way. Any help in setting this up / increasing the quota would be much appreciated. If it helps, I'm happy to give access to the test account; simply write me a message.
With some kind help from very friendly people, I think I have mostly figured this out; it was indeed my failure to use the API correctly.
Why my approach failed
When you use googlesheets4 and any of its functions out of the box, you are asked to authorize the tidyverse API OAuth app (you log in with your Google credentials and grant the OAuth app access rights). This means that you make all your requests through the tidyverse OAuth app, as does every other user in the world relying on this functionality. This is very nice because it works out of the box, but it runs into limitations when other people use the package at the same time. Sharing this quota with other people meant that I ran into the limits quite unpredictably.
How to change the setup to make it work
There are a couple of things that help to alleviate / solve the problem.
Use the dev version of googlesheets4 (devtools::install_github("tidyverse/googlesheets4")). It in turn relies on the dev version of gargle, the package that manages the Google authentication. The dev version of gargle has a retry mechanism that automatically retries your requests if they fail. This should solve the majority of issues.
Get your own OAuth app / Google service account. This allows you to manage the authentication process entirely on your own, so you don't have to share your quota with other users around the world.
To set up your own OAuth app / Google service account, you can do the following (I'm focusing on the Google service account here, as that is much easier in practice):
Log into https://console.cloud.google.com/. You will be asked to create a project. You can see your projects on the left next to "Google Cloud Platform".
Type "APIs and Services" into the search bar, press "enable APIs and services" and search for sheets. Enable this API.
Go back to the search bar and type in "Credentials"
Press "Create credentials" and select service account. A service account gives you programmatic access to the APIs. Give it a name and a description. You should be able to skip the optional parts. Create the service account and go back to the credentials overview. You may have to refresh the page or wait a minute.
Click on your service account (it looks like a very cryptic email address) and go to the "KEYS" tab.
Click "ADD KEY" and create a new key. As key type, select JSON.
Download that key and store it somewhere secure. This should be treated as a combination of password and username!
Now, to actually use your key with googlesheets4, you can run gs4_auth(path = "path-to-your-service-account.json").
In order to access your Google Sheets, you need to grant your service account permissions. Go to your Google Sheet, press Share (as you would to share it with any other user) and type in this cryptic service account email (it should look something like "1234@something.iam.gserviceaccount.com"). Everything should now work without you having to log in anywhere. If you have previously tried other things, I would suggest restarting your R session.
profit.
You should now also be able to track the API requests in the Google Cloud console dashboard.
Note that there is still a limit of 60 requests per user per minute, so you're not getting your full 300 requests. It may be possible to create several service accounts and balance the load between them, but not having other people's requests interfere with yours is already a big improvement!
Google says that it is a security measure. Try sharing the sheet by adding the users' email addresses.

OpenStack Cluster Event Notification

So far, based on my understanding of the OpenStack Python SDK, I am able to read hypervisors and server instances; however, I do not see an API for receiving and handling change notifications/events for operations that happen on the cluster, e.g. a new VM is added, an existing VM is deleted, etc.
There is a similar old post (circa 2016), and I am curious whether there have been any changes in notification handling since then:
Notifications to external systems from openstack
I see documentation that talks about emitting notifications over a message bus to indicate the different events that occur within a service:
https://docs.openstack.org/ironic/latest/admin/notifications.html
I have the following questions:
Does the OpenStack Python SDK support notification APIs?
How do I receive/monitor notifications for VM related changes?
How do I receive/monitor notifications for compute/hypervisor related changes?
How do I receive/monitor notifications for Virtual Switch related changes?
I have seen other posts such as Notifications in openstack that recommend the Ceilometer project, which uses a database. Is there a more lightweight solution than running a completely separate service like Ceilometer?
Thanks in advance for your help in this regard.
As far as I know, the OpenStack SDK doesn't provide such a function.
Ceilometer will not help you either. It only collects data by polling and by notifications over RPC, so you would still have to poll the data from Ceilometer yourself. Besides this, Ceilometer alone has the problem that its database only grows and will eventually blow up; that's why you should also use Gnocchi if you use Ceilometer.
At the moment I see only three possible solutions for you:
Write your own tool that runs permanently in the background and collects the data at regular intervals via the OpenStack SDK and REST API requests (a sketch follows after this list).
Write something that does the same as Ceilometer by receiving notifications over oslo.messaging (RPC). See the oslo_messaging_notifications section in the configs: https://docs.openstack.org/ocata/config-reference/compute/config-options.html#id35 (Neutron has such an option as well) and use messagingv2 as the driver, as Ceilometer does. Be aware, though, that not every event creates a notification. The list of Ceilometer meter data gives a good overview of which events create notifications and what can only be collected by polling: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html. The number of notification events is quite low, so it is possible that it doesn't cover all the events you want.
Use log as the driver in the oslo_messaging_notifications section of the configs to write the notifications to a log file, and write a simple program to read the log file and process or forward its content. The same caveat as in option 2 applies: not every event creates a notification (a log entry, in this case). There is the additional problem that the notifications, and hence the event logs, are created on the compute nodes (as far as I know), so your tool would have to watch all compute nodes.
Since I don't know how much work it would be to write a tool that collects notifications over RPC, and since I don't know whether all the events you want to watch actually create notifications (based on the overview here: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html), I would prefer option 1.
It is the easiest way: create a tool that runs GET requests against the REST API at regular intervals and forwards the results to the desired destination as your own custom notifications.
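A minimal sketch of option 1 with the openstacksdk package (assumptions: a cloud named mycloud is configured in clouds.yaml, and the print calls stand in for whatever forwarding logic you need):
import time
import openstack

# Connect using the "mycloud" entry from clouds.yaml (an assumption here).
conn = openstack.connect(cloud="mycloud")

# Remember the servers we have already seen, keyed by ID.
known = {s.id for s in conn.compute.servers()}

while True:
    time.sleep(30)  # poll interval; tune to taste
    current = {s.id for s in conn.compute.servers()}
    for vm_id in current - known:
        print("VM added:", vm_id)    # forward as your own custom notification
    for vm_id in known - current:
        print("VM deleted:", vm_id)  # forward as your own custom notification
    known = current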
I followed the references below to get this working, and also chatted with the author of this code and video:
https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py
https://www.youtube.com/watch?v=WFq5JWXa9AM
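The heart of that demo is an oslo.messaging notification listener, roughly as follows (a sketch modeled on the linked code; the transport URL is a placeholder, and the topic depends on whether your services emit versioned or unversioned notifications):
from oslo_config import cfg
import oslo_messaging

# Placeholder URL; point it at the RabbitMQ bus on your controller node.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://user:password@controller:5672/")

class Endpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # e.g. event_type == "instance.create.end" when a VM finishes building
        print(event_type, payload)

targets = [oslo_messaging.Target(topic="versioned_notifications")]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [Endpoint()], executor="threading")
listener.start()
listener.wait()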
In addition, I faced another issue:
By default, the OpenStack server would not allow connections to the RabbitMQ bus from a remote host because of an iptables rule. You will have to open the RabbitMQ port in iptables.

Google Scope Authorizations Loop Endlessly When Previewing or Publishing Apps with Cloud SQL Database

My organization set up Cloud SQL as the default for Google App Maker about one month ago. In the last week, we have been unable to preview or publish apps that use Cloud SQL data sources, including the sample applications, which worked perfectly before. The failure occurs during the authorization process. When previewing or publishing an app, Google App Maker displays a dialog stating "Deploying this app requires authorization". Next, it prompts the user for their Google account and then requests approval for the necessary authorizations (e.g., "Manage the data in your Google SQL Service instances"). After the user approves the authorization, the prompts start over again with the dialog stating "Deploying this app requires authorization".
Observations:
We have repeated this problem on multiple different computers, networks, and four different user accounts.
In the Cloud SQL console, our Cloud SQL instance shows new databases being created for each app, along with new database-specific user accounts.
All of the databases appear as expected when I log directly into the Cloud SQL database using phpMyAdmin.
Other apps which don't use a Cloud SQL data source work fine, including an app that uses a calculated data source hosted in the same Cloud SQL instance.
The only errors in the Stackdriver logs for the Cloud SQL database are "INFO"-level communication errors with the database (aborted connection...Got an error reading communication packets).
I'm unable to find Stackdriver logs for the apps themselves because I cannot preview or publish them (either option would provide a link to the Stackdriver logs).
There are now approximately 20 databases in our SQL instance (mostly associated with simple app tests), and we have used only 1 GB of the 10 GB of space in our SQL instance.
I haven't seen any related problems on the Google Issue Tracker for Google App Maker
I'd appreciate any help or suggestions on what to check in order to resolve this issue.
I posted an issue to Google Issue Tracker and Google corrected the problem. They also provided a workaround if this problem happens again.
Here is the response from the Google development team posted on Google Issue Tracker: https://issuetracker.google.com/issues/145345198
It's great to hear you're up and working again! We are aware of this issue and are working through a longer-term fix. The specific bug appears to be related to some changes made in the Google Cloud session policy control, which may have rolled out to your domain recently, interacting with AppMaker in a way that was not expected. We've spent time diagnosing the underlying issue and we believe we know the root cause. I suspect your domain admin did a version of the workaround below.
Without getting too far into the details, the specific bug is that, for a deployer of an AppMaker application, if the Google Cloud session policy is set with any expiration time, the returned token AppMaker sees is invalid, triggering a loop in AppMaker trying to generate a valid security token. Historically, these session tokens never expired, but recently there was a beta feature launch that allowed domain admins to set them to expire. We strongly suspect your domain recently set this expiration policy explicitly, and that is what is causing the bug.
The good news is that these policies are overridable per Organizational Unit and we have tested that OUs which have the original classic Never Expire setting do, in fact, allow AppMaker to work.
My suspicion is that your domain admin has reverted recent, local changes to your organizational policy under the admin.google.com console, specifically under Security > Google Cloud session control (Beta).
If this happens again, here is the workaround we would recommend. Note that you don't need to do this if you're currently up and working. You will need the help of someone with admin.google.com powers, specifically User and Organizational Unit powers at your organization. It is a slight increase in security risk, but it restores some classic behavior that was standard until recently.
The summary of the workaround is to override the Google Cloud session control expiration setting such that individuals who need access to AppMaker deployments can have it. To mitigate systemic security risk, this is best done by creating a limited purpose Organizational Unit with just that setting different than the parent OU settings.
The workaround is to:
Contact someone in your domain with Admin powers for your Google for Business license.
Have your admin proceed to https://admin.google.com. The actions below need to be performed by a domain admin.
Under the Users section, identify the specific user account that needs the ability to deploy AppMaker Apps.
Identify the Organizational Unit of that AppMaker dev user and make a note of it.
Under the Organization Units settings, locate the Organizational Unit you identified above.
Create a new Organizational Unit underneath that user's current Organizational Unit with a descriptive name identifying it as special w.r.t. AppMaker. So, for developers, make something like DevelopersWhoAreAlsoAppMakerDevs.
Back under the Users tab, locate the user from step 3. Move this user into the new Organizational Unit you've just created. This change can take a while to propagate.
-Interlude- At this point, you've made a new Organizational Unit for just that individual and added them to it. You can certainly add multiple people to that OU, especially if they're already in the same parent OU. Use your discretion as to what amount of Organizational rework you wish to pursue. You may not be using OUs at all or you may decide to just turn off this control for the whole domain. It's up to you.
Under admin.google.com's Security settings, locate the Google Cloud session control (beta) settings.
Under this panel, from the dropdown menu on the left, locate the Organization Unit you just created.
Be sure to select ONLY the OU you intend to change.
Change the "Google Cloud Console and Google Cloud SDK session control" from expiring to "Session Never Expires".
Save your changes.
The account you selected in step 3 should now be able to deploy AppMaker apps.
It appears this OU change is only necessary for the deployer of an AppMaker app, not an individual user. Note also that if you have multiple AppMaker developers who all have different current OU settings, you may need to create multiple daughter OUs to avoid a sudden radical shift in OU settings for an individual account.

Need clarity on how pricing works on Smooch Sandbox

I have created a free trial account on Smooch that gives me access to their WeChat API in the sandbox. However, it is still not clear to me what happens after the 14-day trial period is over. Will I be forced to upgrade to a paid plan even for using the sandbox, or is the sandbox always free? Note: my intended use is to develop and test our app against their API, so I will always need API access to their sandbox. Our production usage will be on a different account with the relevant paid plan. I would really appreciate someone sharing their knowledge of how this works.
The trial period applies only to usage of the Smooch public REST API and webhooks, so if you intend to receive and reply to messages programmatically, the trial will not be sufficient for testing or development beyond the 14-day period. If you intend to receive and respond to messages using one of the built-in business system integrations like Slack or Zendesk, you can continue to use those integrations after the trial period expires.
If you have a paid account already, you could also create a separate Smooch App for testing under that same account and take advantage of the API access afforded by your plan.
