StackDriver Custom Metric Resource Type cloud_composer_environment - stackdriver

I have a problem creating a time series in Stackdriver.
I create the time series by calling the API described at https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/create, and I set the resource type to `cloud_composer_environment`.
The JSON looks like:
"resource": {
"type": "cloud_composer_environment",
"labels": {
"project_id": "MY PROJECT ID",
"environment_name": "MY ENVIRONTMENT",
"location": "us-central1"
}
},
When I execute the API, the result is:
{
  "error": {
    "code": 400,
    "message": "One or more TimeSeries could not be written: Metrics cannot be written to cloud_composer_environment.: timeSeries[0]",
    "status": "INVALID_ARGUMENT"
  }
}
I don't know how to fix it; there is no information about why it was a bad request.

Composer Stackdriver metrics are not publicly writable today, and we are currently working on exposing more workflow-related metrics. In the meantime, you may want to create your own custom metrics for reporting and/or use Composer Stackdriver logs for monitoring/alerting as described here.
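For example, here is a minimal sketch of writing a custom metric with the Python google-cloud-monitoring client, using a writable resource type such as global instead of cloud_composer_environment (the metric name custom.googleapis.com/composer/workflow_count and the project ID are placeholders I made up):

import time
from google.cloud import monitoring_v3

project_id = "MY_PROJECT_ID"  # placeholder
client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
# Custom metrics live under custom.googleapis.com/ and can be written freely
series.metric.type = "custom.googleapis.com/composer/workflow_count"
# "global" is a writable resource type, unlike cloud_composer_environment
series.resource.type = "global"
series.resource.labels["project_id"] = project_id

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 1}})
series.points = [point]

client.create_time_series(name=f"projects/{project_id}", time_series=[series])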

Related

Cloud tasks permission error when adding TASK_ID

I have been creating a task following the convention from the documentation - projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID - which in my real example looks something like this: projects/staging/locations/us-central1/queues/members. That all works fine, but I wanted to add the TASK_ID so I can enable the de-duplication feature, so I used projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID/tasks/TASK_ID, which translates to something like projects/staging/locations/us-central1/queues/members/tasks/testing-id. When I try to use the TASK_ID I get the following error:
{
  "message": "The principal (user or service account) lacks IAM permission \"cloudtasks.tasks.create\" for the resource \"projects\/staging\/locations\/us-central1\/queues\/members\/tasks\/testing-id\" (or the resource may not exist).",
  "code": 7,
  "status": "PERMISSION_DENIED",
  "details": [
    {
      "@type": "grpc-server-stats-bin",
      "data": "<Unknown Binary Data>"
    }
  ]
}
Why is this error happening? Why should adding the TASK_ID change what permissions I need?
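For reference, here is a minimal sketch of what a create call with an explicit TASK_ID looks like in the Python google-cloud-tasks client, using the project, location, queue, and task ID from the question (the handler URL is a placeholder):

from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

# Queue path: projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID
parent = client.queue_path("staging", "us-central1", "members")

task = {
    # Explicit task name (.../tasks/TASK_ID) enables de-duplication
    "name": client.task_path("staging", "us-central1", "members", "testing-id"),
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "url": "https://example.com/handler",  # placeholder handler URL
    },
}

response = client.create_task(parent=parent, task=task)
print(response.name)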

Can't create stack from template using Openstack Heat API

I'm trying to create a stack from a template using the Heat API. I'm using the API reference here as a guide. I have the following json in a file called single-server-template.json:
{
  "stack_name": "api-test",
  "template": {
    "heat_template_version": "rocky",
    "description": "Testing Heat API\n",
    "resources": {
      "server1": {
        "type": "OS::Nova::Server",
        "properties": {
          "name": "Server1",
          "image": "Ubuntu 22.04 (Jammy)",
          "flavor": "alt.st1.small",
          "key_name": "my_key",
          "networks": "Internal"
        }
      }
    }
  }
}
I'm sending it with this: curl -X POST -H "X-Auth-Token:$OS_TOKEN" -d #single-server-template.json https://$OS_HOST_URL:8004/v1/$OS_PROJECT_ID/stacks
I've been at it for hours but no matter what I send I get a 400 error:
{"code": 400, "title": "Bad Request", "explanation": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "error": {"type": "HTTPBadRequest", "traceback": null, "message": "The server could not comply with the request since it is either malformed or otherwise incorrect."}}
Things I've tried:
confirmed my environment vars are filled in correctly before sending
confirmed that the template itself is valid by spinning it up from the UI console
confirmed that other endpoints that require auth work as expected
screaming and/or crying
Can anyone confirm that what I've got is correct, or test it against your own OpenStack instance? The only other thing I can think of is that maybe my OpenStack provider has an issue with their API.
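For comparison, here is a minimal sketch of the same request in Python with requests, assuming the token, host, and project ID from the question's environment variables. It sidesteps two curl details: curl only reads a request body from a file when the filename is prefixed with @ (a plain #single-server-template.json argument is sent as a literal string), and curl -d defaults to Content-Type: application/x-www-form-urlencoded, whereas requests sends application/json when the json= argument is used:

import json
import os
import requests

token = os.environ["OS_TOKEN"]
host = os.environ["OS_HOST_URL"]
project_id = os.environ["OS_PROJECT_ID"]

# Load the same stack body used in the question
with open("single-server-template.json") as f:
    payload = json.load(f)

resp = requests.post(
    f"https://{host}:8004/v1/{project_id}/stacks",
    headers={"X-Auth-Token": token},
    json=payload,  # sets Content-Type: application/json
)
print(resp.status_code, resp.text)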

Fetch company posts from linkedin API

I am trying to fetch a company's posts from the API. I have already applied to the Marketing Developer Platform and it was approved. I already got a token with the scope r_organization_social and I'm calling the /shares API:
https://api.linkedin.com/v2/shares?q=owners&owners=urn:li:organization:{company_ID}&sharesPerOwner=100&count=25&sharesPerOwner=10
But I'm getting the following response:
{
  "paging": {
    "start": 0,
    "count": 25,
    "links": [
      {
        "type": "application/json",
        "rel": "next",
        "href": "/v2/shares?count=25&owners=urn%3Ali%3Aorganization%3A{company_ID}&q=owners&sharesPerOwner=10&sharesPerOwner=100&start=0"
      }
    ],
    "total": 242
  },
  "elements": []
}
I tried to change the query params and the result is still the same.

This endpoint worked for me:
https://api.linkedin.com/v2/ugcPosts?q=authors&authors=List(urn%3Ali%3Aorganization%3A<ID_ORGANIZATION>)
See documentation: https://learn.microsoft.com/en-us/linkedin/marketing/integrations/community-management/shares/ugc-post-api?tabs=http#sample-request-6
Disclaimer: I have no access to the LinkedIn API and couldn't test, but these are some things I noticed:
Your URL contains the parameter sharesPerOwner twice; try removing one.
In the docs it's recommended to set sharesPerOwner to 1000 and count to 50. I'd also include the start parameter, just to make sure.
Maybe try something like this:
GET https://api.linkedin.com/v2/shares?q=owners&owners=urn:li:organization:{id}&sharesPerOwner=1000&count=50&start=0
From the API docs (https://learn.microsoft.com/en-us/linkedin/marketing/integrations/community-management/shares/share-api?tabs=http#find-shares-by-owner): "Note that the pagination excludes UGC and Direct Sponsored Content (DSC) posts". Make sure that the owner you are testing contains posts.
If this doesn't work, could you provide some information on how you are sending the request? Have you tried accessing other parts of the API?
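In case it helps, here is a minimal sketch of that request in Python with requests, using the parameters recommended above (the access token and organization ID are placeholders):

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # token with the r_organization_social scope
ORG_ID = "123456"                   # placeholder organization ID

resp = requests.get(
    "https://api.linkedin.com/v2/shares",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={
        "q": "owners",
        "owners": f"urn:li:organization:{ORG_ID}",
        "sharesPerOwner": 1000,
        "count": 50,
        "start": 0,
    },
)
print(resp.status_code, resp.json())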

GA: How to access Cohort Analysis via Analytics API?

Cohort dimensions and metrics are listed here. Yet, when I try to query them using the API (e.g. using the Query Explorer), a 400 error occurs.
One of the queries I've tried is: metrics = ga:cohortActiveUsers and dimensions = ga:cohortNthDay.
Is it possible to query the Cohort Analysis report via the API?
The problem you are having is because the Query Explorer uses v3 of the Google Analytics API. If you look at the Dimensions and Metrics Explorer you will notice that these dimensions were added in the Analytics Reporting API V4.
The error message you are getting is incorrect and should be corrected soon; it should state something more like "This metric cannot be used in Version 3 of the API." You caught this while we were in the process of rolling out the new API, which has now been officially released; see the change log.
To make use of these new dimensions and metrics you must construct a V4 cohort request (note that no date range is required in the request):
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet
{
  "reportRequests": [{
    "viewId": "XXXX",
    "dimensions": [{"name": "ga:cohort"}, {"name": "ga:cohortNthDay"}],
    "metrics": [
      {"expression": "ga:cohortActiveUsers"},
      {"expression": "ga:cohortTotalUsers"}
    ],
    "cohortGroup": {
      "cohorts": [{
        "name": "cohort 1",
        "type": "FIRST_VISIT_DATE",
        "dateRange": {"startDate": "2015-08-01", "endDate": "2015-08-01"}
      }, {
        "name": "cohort 2",
        "type": "FIRST_VISIT_DATE",
        "dateRange": {"startDate": "2015-07-01", "endDate": "2015-07-01"}
      }]
    }
  }]
}
It is possible to compose a cohort request by using the Request Composer tool, in the Cohort Request tab.
As you select the options in the Set query parameters section, the request payload is shown below.
Hope it helps.
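If you are calling the API from code rather than the Request Composer, here is a minimal sketch of the same batchGet request with the google-api-python-client library (the service account key file and view ID are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder service account key with read access to the view
credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=["https://www.googleapis.com/auth/analytics.readonly"]
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

body = {
    "reportRequests": [{
        "viewId": "XXXX",  # your view ID
        "dimensions": [{"name": "ga:cohort"}, {"name": "ga:cohortNthDay"}],
        "metrics": [
            {"expression": "ga:cohortActiveUsers"},
            {"expression": "ga:cohortTotalUsers"},
        ],
        "cohortGroup": {
            "cohorts": [
                {"name": "cohort 1", "type": "FIRST_VISIT_DATE",
                 "dateRange": {"startDate": "2015-08-01", "endDate": "2015-08-01"}},
                {"name": "cohort 2", "type": "FIRST_VISIT_DATE",
                 "dateRange": {"startDate": "2015-07-01", "endDate": "2015-07-01"}},
            ]
        },
    }]
}

response = analytics.reports().batchGet(body=body).execute()
print(response)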

google prediction api "hello prediction" tutorial

I have run through the Google Prediction API tutorials and documentation for "hello prediction" - https://cloud.google.com/prediction/docs/hello_world
However, when training the model in the Developer Console, my request fails with the following output:
Request:
POST https://www.googleapis.com/prediction/v1.6/projects/959568262740/trainedmodels?key={YOUR_API_KEY}
{
  "id": "language_id",
  "storageDataLocation": "http://storage.googleapis.com/2341234/language_id.txt"
}
Response:
400 OK
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Training data file is empty.",
        "locationType": "other",
        "location": "id"
      }
    ],
    "code": 400,
    "message": "Training data file is empty."
  }
}
I've implemented "Authorize requests using OAuth 2.0" - is there anything else that I should be doing, or anything that may have changed between Google Prediction API v1.6 and the tutorial? Any link to an additional tutorial or article on the subject would also be extremely valuable so I can debug it myself!
Your storageDataLocation is not correct.
When you go to the Overview tab in the Google Developers Console, you'll find your project ID, xxxxx (example), and the bucket you've stored the data in is called yyyy (example).
Then replace the "storageDataLocation" value with "xxxxx/yyyy". That should solve the problem. You need the path relative to Google Storage, not the absolute web URL.
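Following that advice, here is a minimal sketch of the corrected request in Python with requests, assuming the bucket and file from the question (2341234/language_id.txt) and a valid OAuth 2.0 access token:

import requests

ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"  # placeholder
PROJECT = "959568262740"  # project number from the question

body = {
    "id": "language_id",
    # Bucket-relative Cloud Storage path, not the absolute web URL
    "storageDataLocation": "2341234/language_id.txt",
}

resp = requests.post(
    f"https://www.googleapis.com/prediction/v1.6/projects/{PROJECT}/trainedmodels",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
print(resp.status_code, resp.json())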
