Is there an API to limit the number of log lines returned from Stackdriver?

I need to get the most recent 100 lines from a Stackdriver log. Is there an API that can be used to implement this scenario in Stackdriver? Currently I'm using the Google Cloud Java client to implement the scenario.

You can use the gcloud CLI with the `--limit` flag for this:
gcloud logging read "resource.type=gce_instance AND logName=projects/my-gcp-project-id/logs/syslog AND textPayload:SyncAddress" --limit 10 --format json
https://cloud.google.com/logging/docs/reference/tools/gcloud-logging#reading_log_entries
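The same limit is available programmatically: the underlying `entries.list` REST method (which the Java and other Cloud Logging client libraries wrap) accepts a `pageSize` and an `orderBy`, so fetching the 100 most recent lines is a single call. A minimal sketch of the request body, with a placeholder project ID (here the request is only built, not sent):

```python
import json

def recent_entries_request(project_id, log_filter, limit=100):
    """Build the JSON body for a Cloud Logging entries.list call:
    POST https://logging.googleapis.com/v2/entries:list
    Ordering by timestamp descending makes the first page the newest lines."""
    return {
        "resourceNames": [f"projects/{project_id}"],
        "filter": log_filter,
        "orderBy": "timestamp desc",
        "pageSize": limit,
    }

body = recent_entries_request(
    "my-gcp-project-id",
    'resource.type="gce_instance" AND '
    'logName="projects/my-gcp-project-id/logs/syslog"',
)
print(json.dumps(body, indent=2))
```

The Java client exposes the same knobs as list options on its `listLogEntries` call, so no separate API is needed beyond page size plus descending timestamp order.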

Related

Kubernetes Client API from Google Cloud Functions (Firebase) Token Refresh

I want to start Kubernetes jobs on a GKE cluster from a Google Cloud Function (Firebase).
I'm using the Kubernetes Node client: https://github.com/kubernetes-client/javascript
I've created a Kubernetes config file using `kubectl config view --flatten -o json`
and loaded it:
const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromString(config);
This works perfectly locally, but the problem is that when running on Cloud Functions the token can't be refreshed, so calls fail after a while.
My k8s config file contains:
"user": {
"auth-provider": {
"name": "gcp",
"config": {
"access-token": "redacted-secret-token",
"cmd-args": "config config-helper --format=json",
"cmd-path": "/usr/lib/google-cloud-sdk/bin/gcloud",
"expiry": "2022-10-20T16:25:25Z",
"expiry-key": "{.credential.token_expiry}",
"token-key": "{.credential.access_token}"
}
}
I'm guessing the command path points to the gcloud SDK, which is used to get a new token when the current one expires. This works locally, but on Cloud Functions it doesn't, as there is no /usr/lib/google-cloud-sdk/bin/gcloud.
Is there a better way to authenticate or a way to access the gcloud binary from cloud functions?
I have a similar mechanism (using Cloud Functions to authenticate to Kubernetes Engine) albeit written in Go.
This approach uses Google's Kubernetes Engine API to get the cluster's credentials and construct the KUBECONFIG using the values returned. This is equivalent to:
gcloud container clusters get-credentials ...
APIs Explorer has a Node.js example for the above method. The example uses Google's API Client Library for Node.js for Kubernetes Engine; also see here.
There's also a Google Cloud Client Library for Node.js for Kubernetes Engine, and this includes getCluster, which (I assume) is equivalent. Confusingly, there's getServerConfig too, and it's unclear from the API docs what the difference between these methods is.
Here's a link to the gist containing my Go code. It constructs a Kubernetes Config object that can then be used by the Kubernetes API to authenticate you to a cluster.
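The same idea, sketched in Python rather than Go: fetch the cluster's endpoint and CA certificate via the Kubernetes Engine API, get a short-lived access token from the metadata server, and assemble a kubeconfig-shaped object that uses a bearer token instead of the gcloud auth-provider (so no gcloud binary is needed at refresh time). The endpoint, certificate, and token below are placeholder values, not real credentials:

```python
import base64

def build_kubeconfig(endpoint, ca_cert_b64, access_token, name="gke-cluster"):
    """Assemble a kubeconfig-style dict, mirroring what
    `gcloud container clusters get-credentials` writes, but authenticating
    with a bearer token so token refresh never shells out to gcloud."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": name,
            "cluster": {
                "server": f"https://{endpoint}",
                "certificate-authority-data": ca_cert_b64,
            },
        }],
        "users": [{
            "name": name,
            # Short-lived OAuth2 token; refresh it from the metadata
            # server before each batch of calls.
            "user": {"token": access_token},
        }],
        "contexts": [{
            "name": name,
            "context": {"cluster": name, "user": name},
        }],
        "current-context": name,
    }

# Placeholder inputs; in a Cloud Function these would come from the
# clusters.get API response and the metadata server respectively.
cfg = build_kubeconfig(
    "203.0.113.10",
    base64.b64encode(b"-----BEGIN CERTIFICATE-----...").decode(),
    "ya29.placeholder-token",
)
```

A dict like this can be serialized and handed to a Kubernetes client's config loader, the same way the Go gist hands its Config object to the Kubernetes API.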

How to get the size of a partition in Cosmos DB collection?

I am really surprised that I could not find this information. I have seen a comment (without details) that there is a REST API to get it, so I will try to track that down.
You can use the Azure CLI to get the size of a Mongo collection like this:
az cosmosdb mongodb collection show -g myRG -a mycosmosaccount -d mydatabase -n mycollection
I also think you can get this via the REST API on the Management Plane; an API sample for that is here.
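The Management Plane call addresses the collection by its ARM resource ID. A sketch of building that ID (the subscription and resource names are placeholders matching the CLI example above):

```python
def cosmos_mongo_collection_id(sub, rg, account, db, coll):
    """ARM resource ID for a Cosmos DB Mongo collection, as used by the
    Management Plane REST API and the Azure SDKs."""
    return (
        f"/subscriptions/{sub}/resourceGroups/{rg}"
        f"/providers/Microsoft.DocumentDB/databaseAccounts/{account}"
        f"/mongodbDatabases/{db}/collections/{coll}"
    )

rid = cosmos_mongo_collection_id(
    "00000000-0000-0000-0000-000000000000",
    "myRG", "mycosmosaccount", "mydatabase", "mycollection",
)
```

A GET against that resource (with the appropriate `api-version` query parameter) is what the `az cosmosdb mongodb collection show` command issues under the hood.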

Does Firebase Realtime Database REST API support multi path updates at different entity locations?

I am using the REST API of Firebase Realtime Database from an AppEngine Standard project with Java. I am able to successfully put data under different locations, however I don't know how I could ensure atomic updates to different paths.
To put some data separately at a specific location I am doing:
requestFactory.buildPutRequest("dbUrl/path1/17/", new ByteArrayContent("application/json", json1.getBytes())).execute();
requestFactory.buildPutRequest("dbUrl/path2/1733455/", new ByteArrayContent("application/json", json2.getBytes())).execute();
Now, to ensure that when saving a /path1/17/ a /path2/1733455/ is also saved, I've been looking into multi-path updates and batched writes (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes, only available in Cloud Firestore?). However, I did not find whether this feature is available in the REST API of the Firebase Realtime Database as well, or only through the Firebase Admin SDK.
The example here shows how to do a multi-path update at two locations under the "users" node.
curl -X PATCH -d '{
"alanisawesome/nickname": "Alan The Machine",
"gracehopper/nickname": "Amazing Grace"
}' \
'https://docs-examples.firebaseio.com/rest/saving-data/users.json'
But I don't have a common upper node for path1 and path2.
I tried setting the URL to the database URL without any nodes (https://db.firebaseio.com.json) and adding the nodes in the JSON object sent, but I get an error: nodename nor servname provided, or not known.
This would be possible with the Admin SDK I think, according to this blog post: https://firebase.googleblog.com/2015/09/introducing-multi-location-updates-and_86.html
Any ideas if these atomic writes can be achieved with the REST API?
Thank you!
If the updates are going to a single database, there is always a common path.
In your case you'll run the PATCH command against the root of the database:
curl -X PATCH -d '{
"path1/17": json1,
"path2/1733455": json2
}' 'https://yourdatabase.firebaseio.com/.json'
The key difference with your URL seems to be the / before .json. Without that you're trying to connect to a domain on the json TLD, which doesn't exist (yet) afaik.
Note that the documentation link you provide for Batched Updates is for Cloud Firestore, which is a completely separate database from the Firebase Realtime Database.
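The same root-level PATCH, built from code rather than curl. The database URL is a placeholder, and the request itself is left commented out so the snippet only constructs the payload:

```python
import json

DB_URL = "https://yourdatabase.firebaseio.com"  # placeholder database URL

def multi_path_patch_body(updates):
    """Serialize a multi-path update. Keys are paths relative to the
    database root; the Realtime Database applies all of them atomically."""
    return json.dumps(updates)

body = multi_path_patch_body({
    "path1/17": {"field": "json1"},          # stand-ins for the two
    "path2/1733455": {"field": "json2"},     # JSON payloads in the question
})
url = DB_URL + "/.json"  # note the slash before .json
# requests.patch(url, data=body)  # would perform the atomic write
```

From Java the equivalent is a single `buildPatchRequest` against `dbUrl/.json` with both paths in one JSON body, replacing the two separate PUT requests.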

Running AWS commands from commandline on a ShellCommandActivity

My original problem was that I want to increase my DynamoDB write throughput before I run the pipeline, and then decrease it when I'm done uploading (doing it max once a day, so I'm fine with the decreasing limitations).
The only way I found to do it is through a shell script that will issue the API commands to alter the throughput. How does it work with my AWS access_key and secret_key when it's a resource that the pipeline creates for me? (I can't log in to set the ~/.aws/config file, and I don't really want to create an AMI just for this.)
Should I write the script in bash? Can I use the Ruby/Python AWS SDK packages, for example? (I prefer the latter.)
How do I pass my credentials to the script? Are there runtime variables (like #startedDate) that I can pass as arguments to the activity with my key and secret? Do I have any other way to authenticate with either the command-line tools or the SDK package?
If there is another way to solve my original problem, please let me know. I only got to the ShellActivity solution because I couldn't find anything else in the documentation/forums.
Thanks!
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html
The resourceRole in the default object in your pipeline will be the one assigned to resources (Ec2Resource) that are created as part of the pipeline activation.
The default one is configured to have all your permissions, and the AWS command-line tools and SDK packages automatically look for those credentials, so there is no need to update ~/.aws/config or pass credentials manually.
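With the resource role in place, the Python SDK picks up credentials automatically, so the throughput change from a ShellCommandActivity script reduces to one `UpdateTable` call. A sketch, with a placeholder table name and the actual boto3 call commented out so only the request parameters are built here:

```python
def throughput_update_params(table, read_units, write_units):
    """Parameters for DynamoDB UpdateTable to change provisioned throughput."""
    return {
        "TableName": table,
        "ProvisionedThroughput": {
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    }

# Raise write throughput before the upload, lower it afterwards
# (DynamoDB limits how often you can decrease it per day).
params = throughput_update_params("my-table", 5, 1000)
# import boto3
# boto3.client("dynamodb").update_table(**params)  # credentials from the resource role
```

The same two-step raise/lower pattern then just runs as a script at the start and end of the pipeline, with no keys passed in.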

Authenticating Service Accounts on Google Compute Engine with BigQuery, via R Studio Server

I'm looking to call BigQuery from R Studio, installed on a Google Compute Engine.
I have the bq Python tool installed on the instance, and I was hoping to use its service accounts and system() to get R to call the bq command-line tool and so get the data.
However, I run into authentication problems, where it asks for a browser key. I'm pretty sure there is no need to get the key due to the service account, but I don't know how to construct the authentication from within R (it runs on RStudio, so it will have multiple users).
I can get an authentication token like this:
library(RCurl)
library(RJSONIO)
metadata <- getURL('http://metadata/computeMetadata/v1beta1/instance/service-accounts/default/token')
tokendata <- fromJSON(metadata)
tokendata$access_token
But how do I then use this to generate a .bigqueryrc token? It's the lack of this that triggers the authentication attempt.
This works ok:
system('/usr/local/bin/bq')
showing me that bq is installed OK.
But when I try something like:
system('/usr/local/bin/bq ls')
I get this:
Welcome to BigQuery! This script will walk you through the process of initializing your .bigqueryrc configuration file.
First, we need to set up your credentials if they do not already exist.
******************************************************************
** No OAuth2 credentials found, beginning authorization process **
******************************************************************
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=XXXXXXXX.apps.googleusercontent.com&access_type=offline
Enter verification code: You have encountered a bug in the BigQuery CLI. Google engineers monitor and answer questions on Stack Overflow, with the tag google-bigquery: http://stackoverflow.com/questions/ask?tags=google-bigquery
etc.
Edit:
I have managed to get bq functioning from RStudio system() commands by skipping the authentication: logging in to the terminal as the RStudio user, authenticating there by signing in via the browser, then logging back into RStudio and calling system("bq ls") etc. So this is enough to get me going :)
However, I would still prefer it if BQ could be authenticated within RStudio itself, as many users may log in and I'd need to authenticate via the terminal for all of them. The service account documentation, and the fact that I can get an authentication token, hint that this should be easier.
For the time being, you need to run 'bq init' from the command line to set up your credentials prior to invoking bq from a script in GCE. However, the next release of bq will include support for GCE service accounts via a new --use_gce_service_account flag.
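Until that flag lands, the service-account token you already retrieve from the metadata server can be used directly against the BigQuery REST API, bypassing bq and its .bigqueryrc entirely. A sketch of building such a request (the project ID and token are placeholders, and the HTTP call itself is commented out):

```python
def bq_list_datasets_request(project_id, access_token):
    """URL and headers for a BigQuery REST datasets.list call,
    authenticated with a metadata-server access token."""
    url = f"https://www.googleapis.com/bigquery/v2/projects/{project_id}/datasets"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

url, headers = bq_list_datasets_request("my-project", "ya29.placeholder")
# requests.get(url, headers=headers)
# In R: httr::GET(url, httr::add_headers(Authorization = "Bearer <token>"))
```

Since the R code above already fetches the token per session, each RStudio user could call the REST API this way without any per-user terminal step.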
