Kubernetes Client API from Google Cloud Functions (Firebase): Token Refresh

I want to start Kubernetes jobs on a GKE cluster from a Google Cloud Function (Firebase).
I'm using the Kubernetes Node.js client: https://github.com/kubernetes-client/javascript
I've created a Kubernetes config file using `kubectl config view --flatten -o json`
and loaded it:
const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromString(config);
This works perfectly locally, but the problem is that when running on Cloud Functions the token can't be refreshed, so calls fail after a while.
My k8s config file contains:
"user": {
"auth-provider": {
"name": "gcp",
"config": {
"access-token": "redacted-secret-token",
"cmd-args": "config config-helper --format=json",
"cmd-path": "/usr/lib/google-cloud-sdk/bin/gcloud",
"expiry": "2022-10-20T16:25:25Z",
"expiry-key": "{.credential.token_expiry}",
"token-key": "{.credential.access_token}"
}
}
I'm guessing the command path points to the gcloud SDK, which is used to get a new token when the current one expires. This works locally, but on Cloud Functions it doesn't, as there is no /usr/lib/google-cloud-sdk/bin/gcloud.
Is there a better way to authenticate, or a way to access the gcloud binary from Cloud Functions?

I have a similar mechanism (using Cloud Functions to authenticate to Kubernetes Engine) albeit written in Go.
This approach uses Google's Kubernetes Engine API to get the cluster's credentials and construct the KUBECONFIG using the values returned. This is equivalent to:
gcloud container clusters get-credentials ...
APIs Explorer has a Node.js example for the above method. The example uses Google's API Client Library for Node.js for Kubernetes Engine (also see here).
There's also a Google Cloud Client Library for Node.js for Kubernetes Engine, and this includes getCluster, which (I assume) is equivalent. Confusingly, there's getServerConfig too, and it's unclear from the API docs what the difference between these methods is.
Here's a link to the gist containing my Go code. It constructs a Kubernetes Config object that can then be used by the Kubernetes API to authenticate you to a cluster.
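If you want to stay in Node.js, here is a rough, untested sketch of the same approach using the Cloud Client Library (@google-cloud/container), google-auth-library and the Kubernetes client's loadFromOptions. The project, location and cluster names are placeholders you would fill in yourself:
// Sketch only: build a KubeConfig from the Kubernetes Engine API instead of a static kubeconfig.
// PROJECT_ID, LOCATION and CLUSTER_NAME are placeholders.
const container = require('@google-cloud/container');
const { GoogleAuth } = require('google-auth-library');
const k8s = require('@kubernetes/client-node');

async function getKubeConfig() {
  // Look up the cluster endpoint and CA certificate.
  const gke = new container.v1.ClusterManagerClient();
  const [cluster] = await gke.getCluster({
    name: 'projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_NAME',
  });

  // Get an access token for the function's own service account (no gcloud binary needed).
  const auth = new GoogleAuth({ scopes: ['https://www.googleapis.com/auth/cloud-platform'] });
  const token = await auth.getAccessToken();

  // Assemble the equivalent of `gcloud container clusters get-credentials`.
  const kc = new k8s.KubeConfig();
  kc.loadFromOptions({
    clusters: [{
      name: 'gke-cluster',
      server: `https://${cluster.endpoint}`,
      caData: cluster.masterAuth.clusterCaCertificate,
      skipTLSVerify: false,
    }],
    users: [{ name: 'gke-user', token }],
    contexts: [{ name: 'gke-context', cluster: 'gke-cluster', user: 'gke-user' }],
    currentContext: 'gke-context',
  });
  return kc;
}
Since the token belongs to the function's service account, there is nothing to refresh via a gcloud binary; you only need to grant that service account a suitable role on the cluster's project and re-fetch the token (or re-run getKubeConfig()) when it expires.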

Related

VPC creation problem in AWS via Terraform

I have been trying to create VPC infrastructure in AWS through Terraform, but I am unable to perform the "terraform apply" command. Has anyone had a similar problem while using a free trial account?
Error: Error creating VPC: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 4HZVo3-eWCS-YLhRy55P_0T13F_fPtA29TYrJrSe5_dyPxIcqRbh7_wCcrCZr2cpmb-B5--_fxVaOngBfHD_7yfnPH7NLf1rrqpb7ge1mvQrK8P0Ltfpgpm37nZXezZUoYf1t4peB25aCxnbfeboHpgJjcFnHvqvf5so5G2PufnGZSB4FUZMfdaqppnJ-sNT7b36TonHUDNbLhBVUl5Fwd8d02R-6ZraRYvDx-o4lDfP9xSWs6PMUFXNr1qzruYaeMYMxIe-9kGOQptgBLYZXsxr966ajor-p6aLJAKlIwPGN7Iz7v893oGpGgz_8wxTv4oEb5GnfYOuPOqSyEMLKI69b2JUvVU1m4tCcjKBaHJARP5sIiFSGhh4lb_E0_cKkmmFfKzyET2h8YkSD8U9Lm4rRtGbAEJvIoDZYDkNxlW7W2XvsccmLnQFeSxpLolVhguExkP7DT9uXffJzFEjQn-VkhqKnWlwv0vxIcOcoLP04Li5WAqRRr3l7yK2bYznfg
│ status code: 403, request id: 5c297a4d-7bcf-4bb4-b311-37480e1f26b8
Make sure you have properly set up AWS credentials and permissions.
Check these two files:
~/.aws/credentials
~/.aws/config
These docs can help you:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Did you configure your access keys?
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
There are multiple ways to do it (described here).
My example above can be a good start, but you don't want to commit those keys, so I recommend configuring them in ~/.aws/credentials (as you need them for the AWS CLI anyway). The aws provider will pick them up automatically, so you don't need to define them anywhere in your Terraform code; see the sketch below.
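For reference, a minimal ~/.aws/credentials file looks like this (the values below are placeholders, not real keys):
[default]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>
The aws provider reads the [default] profile unless you point it at another one via the profile argument or the AWS_PROFILE environment variable.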

cloud functions python to access Datastore

I am looking for a tutorial or document on how to access Datastore using Cloud Functions (Python).
However, it seems there is only a tutorial for Node.js:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/functions/datastore
Can anybody help me out?
Thanks
There is no special setup needed to access Datastore from Cloud Functions in Python.
You just need to add google-cloud-datastore to requirements.txt and use the Datastore client as usual.
requirements.txt
# Function dependencies, for example:
# package>=version
google-cloud-datastore==1.8.0
main.py
from google.cloud import datastore

datastore_client = datastore.Client()

def foo(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values...
    """
    # Replace <KindName> with the kind of the entities you want to read.
    query = datastore_client.query(kind="<KindName>")
    data = query.fetch()
    for e in data:
        print(e)
    # Return something so the HTTP function has a response.
    return "done"
Read more:
Python Client for Google Cloud Datastore
Setting Up Authentication for Server to Server Production Applications

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP endpoints to a Cloud SQL (PostgreSQL) database in a different project. My endpoints backend is an app engine in the flexible environment using Python.
The endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result when requiring DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file to use with psycopg2 SQL strings. This code works on CE servers in the same project.
I also double-checked that the project with the PostgreSQL db was given Cloud SQL Editor access to the service account of the Endpoints project. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (not coming from the endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting would really worsen the reading, so I will post it here and update as needed.)
I am trying to reproduce your error and have come up with some questions:
How are you handling the environment variables in the tutorials? Have you hard-coded them or are you using environment variables? (They are reset with Cloud Shell, if you are using Cloud Shell.)
This is not clear to me: do you see any kind of log in Cloud SQL (without errors), or do you not see logs at all?
The Cloud SQL, app.yaml and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful and do not post usernames, passwords or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you add more information to the post to better understand where the issue comes from.

Can Firebase RemoteConfig be accessed from cloud functions

I'm using Firebase as a simple game server and have some settings that are relevant for both client and backend. I'd like to keep them in Remote Config for consistency, but I'm not sure if I can access it from my cloud functions in a simple way (I don't consider going through the REST interface a "simple" way).
As far as I can tell there is no mention of it in the docs, so I guess it's not possible, but does anyone know for sure?
firebaser here
There is a public REST API that allows you to read and set Firebase Remote Config conditions. This API requires that you have full administrative access to the Firebase project, so it must only be used in a trusted environment (such as your development machine, a server you control, or Cloud Functions).
There is no public API to get Firebase Remote Config settings from a client environment at the moment. Sorry I don't have better news.
This is probably only included in newer versions of firebase-admin (version 8 or 9 and above, if I'm not mistaken).
// We first need to import the remoteConfig function.
import { remoteConfig } from 'firebase-admin';
// Then in your cloud function we use it to fetch our remote config template.
const remoteConfigTemplate = await remoteConfig().getTemplate().catch(e => {
  // Your error handling if fetching fails...
});
// Next it is just a matter of extracting the values, which is kinda convoluted.
// Let's say you want to extract the `game_version` field from remote config:
const gameVersion = remoteConfigTemplate.parameters.game_version.defaultValue.value;
So parameters is always followed by the name of the field that you defined in the Firebase console's Remote Config, in this example game_version.
It's a mouthful (or typeful) but that's how you get it.
Also note that if the value is stored as a JSON string, you will need to parse it before usage, commonly JSON.parse(gameVersion).
A similar process is outlined in the Firebase docs.

Cloud Foundry - Installing Micro Bosh in a VM ( OpenStack )

I have followed the instructions on http://cloudfoundry.github.com/docs/running/deploying-cf/openstack/install_microbosh_openstack.html to install Micro BOSH in a VM.
I'm a little confused about the micro_bosh.yml:
name: microbosh-openstack

env:
  bosh:
    password: $6$u/dxDdk4Z4Q3$MRHBPQRsU83i18FRB6CdLX0KdZtT2ZZV7BLXLFwa5tyVZbWp72v2wp.ytmY3KyBZzmdkPgx9D3j3oHaDZxe6F.

level: DEBUG

network:
  name: default
  type: dynamic
  label: private
  ip: 192.168.22.34

resources:
  persistent_disk: 4096
  cloud_properties:
    instance_type: m1.small

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://10.0.0.2:5000/v2.0/tokens
      username: admin
      api_key: f00bar
      tenant: admin
      default_key_name: admin-keypair
      default_security_groups: ["default"]
      private_key: /root/.ssh/admin-keypair.pem
What is the api_key used for? I don't comprehend the meaning of this key.
And the default key name?
Can someone please explain these configuration options better?
Thanks,
Bruno
EDIT
The answer to this question can be found here:
https://github.com/drnic/bosh-getting-started/blob/master/create-a-bosh/creating-a-micro-bosh-from-stemcell-openstack.md
http://10.0.0.2:5000/v2.0/tokens
Likely refers to the Keystone Service API.
This API authenticates you to OpenStack's Keystone identity service. All REST API services are catalogued there in the service catalog. Additionally, all of OpenStack relies on Keystone to authenticate all API queries.
Knowing nothing about BOSH, the attribute 'api_key' to me requires better context.
Generally, OpenStack doesn't require an API key in its own concept of API authentication.
More about openstack api authentication here:
http://docs.openstack.org/api/quick-start/content/index.html#Getting-Credentials-a00665
However, there is a concept of an API key in relation to EC2 keys. These can be generated with this command:
keystone ec2-credentials-create
My guess is that's what it requires there.
More alternatives:
Credentials could be in the novarc file generated for your OpenStack project with the nova-manage project zipfile command. This is also available from the Horizon interface.
Alternatively, it could refer to a provider-specific API key such as Rackspace's (I doubt this):
http://docs.rackspace.com/servers/api/v2/cs-devguide/content/curl_auth.html
'default_key_name' probably refers to the name of a keypair that has been previously registered with OpenStack. This would be a keypair that can be injected into an image at instance run time. It should correspond to the .pem filename. The key would need to be available to your user and the tenant that you choose in the config.
Check out a keypair creation / use example here:
http://docs.openstack.org/developer/nova/runnova/managing.instances.html
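If it helps, creating and registering such a keypair with the legacy nova CLI looked roughly like this (the name matches the default_key_name in your config; adjust the path as needed):
# Generate a keypair called admin-keypair and save the private key where micro_bosh.yml expects it.
nova keypair-add admin-keypair > /root/.ssh/admin-keypair.pem
chmod 600 /root/.ssh/admin-keypair.pem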
Best effort man. Hope that gives you what you need.
