Add a Cloud Run user cluster to an existing Anthos cluster for Apigee, to have one admin cluster for Cloud Run and Apigee

Can I add a new user cluster for Cloud Run to my existing Anthos cluster that runs Apigee hybrid,
so that I have one admin cluster and two user clusters: one for Apigee and one for Cloud Run?
Thanks.

Related

DynamoDB - Gateway VPC endpoint "across two accounts"

Infrastructure description: I have a DynamoDB table in one AWS account (say A1) and an application hosted on EC2 in another account (say A2), in a private subnet of a VPC. This app (in account A2) reads/writes that DynamoDB table in account A1. Both accounts are under the same organization, and the table and the app are in the same AWS region. I created a gateway VPC endpoint (say VPC-E1) for DynamoDB in the application's account (A2), and the route table is correctly populated with the VPC endpoint targets. The app authorizes itself using the AssumeRole method. I attached a policy to the same IAM role that the EC2 instance uses, denying access to the DynamoDB table unless the source VPC endpoint is the one I created (VPC-E1). NOTE: the EC2 instance has internet connectivity via a NAT gateway.
IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "dynamodb:*",
      "Effect": "Deny",
      "Resource": [
        "My DynamoDb table ARN"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "VPC-E1"
        }
      }
    }
  ]
}
Reachability Analyzer says that traffic from the EC2 instance to the VPC endpoint works fine.
The problem: the traffic is denied by the policy, so either the traffic is not going through that VPC endpoint, or (I assume) it is going out via the NAT gateway/internet instead. When I remove this policy everything works fine, presumably because the traffic goes via the NAT gateway.
Has anyone configured such a setup successfully, i.e. accessing DynamoDB across accounts via the AWS private network (VPC endpoint)? My aim is to send the traffic over the AWS private network from one account/VPC to the account that the DynamoDB table belongs to.
Yes. DynamoDB is a SaaS offering and is not hosted inside your VPC. I removed the condition from the IAM policy. I then created a CloudTrail trail in the table-owning account (A1) to capture data events from the specific DynamoDB table. The VPC endpoint created in the consumer account (A2) appears in the CloudTrail data-event logs for that table/index in the target AWS account (A1). Hence this works.
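For reference, a minimal Terraform sketch of the gateway-endpoint wiring described above (a sketch only; the region in service_name and the variable names are hypothetical):

# Gateway VPC endpoint for DynamoDB in the consumer account (A2).
# AWS adds a DynamoDB prefix-list route to each listed route table.
resource "aws_vpc_endpoint" "dynamodb" {
  vpc_id            = var.vpc_id                          # VPC of the EC2 app
  service_name      = "com.amazonaws.us-east-1.dynamodb"  # adjust the region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [var.private_route_table_id]        # private subnet's route table
}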

Assign GCP functions service account roles to engage with Firebase using Terraform

I want to use the Firebase Admin SDK in my GCP cloud function, specifically for creating custom auth tokens.
I was getting auth/insufficient-permission errors after deployment and got to this thread. Note that it talks about Firebase functions, while I use pure GCP Cloud Functions.
To my understanding, GCP Cloud Functions uses the default App Engine service account, which is missing the Firebase Admin SDK admin service agent role.
I manually added it through the GCP console and that seems to solve the issue, but now I want to automate it via Terraform, where I manage my infrastructure.
How do I access the default App Engine service account? I think it's auto-created when the GCP project is created.
How do I add the relevant role to it without affecting other service accounts that use that role?
Is this the right approach, or is there a better way I'm missing?
The relevant documentation I was looking at is here. Note that I'm using initializeApp() without arguments, i.e. letting the library discover the service account implicitly.
How to get the default App Engine service account through Terraform: google_app_engine_default_service_account
How to work with 'additional' IAM roles assigned to a service account: IAM policy for service account
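A minimal sketch combining those two pieces, assuming the role ID roles/firebase.sdkAdminServiceAgent for the "Firebase Admin SDK admin service agent" role (verify the exact ID in the IAM console) and a hypothetical var.project_id:

# Look up the auto-created App Engine default service account.
data "google_app_engine_default_service_account" "default" {
  project = var.project_id
}

# Additive (non-authoritative) binding: grants this one role to this one
# member and leaves every other binding on the project untouched.
resource "google_project_iam_member" "firebase_admin_agent" {
  project = var.project_id
  role    = "roles/firebase.sdkAdminServiceAgent"
  member  = "serviceAccount:${data.google_app_engine_default_service_account.default.email}"
}

Because google_project_iam_member is non-authoritative, it addresses the concern about not disturbing other service accounts that hold the same role.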
As a general recommendation, I would prefer to use a purpose-created service account and completely delete (or disable) the default App Engine service account.
Edit ==> Additional details as requested
Here is a description of the Cloud Functions runtime service account:
The App Engine service account has the Editor role, which allows it broad access to many Google Cloud services. While this is the fastest way to develop functions, Google recommends using this default service account for testing and development only. For production, you should grant the service account only the minimum set of permissions required to achieve its goal.
Thus, it may be useful to delete/disable the App Engine service account, create a dedicated service account for the given Cloud Function, assign it the minimum set of relevant IAM roles, and use that instead.
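A hedged sketch of that approach (the account ID, role, and function arguments are illustrative; the essential part is service_account_email):

# Dedicated service account for the function (hypothetical name).
resource "google_service_account" "function_sa" {
  account_id   = "my-function-sa"
  display_name = "Dedicated Cloud Function service account"
}

# Grant only the minimum roles the function actually needs.
resource "google_project_iam_member" "function_sa_firebase" {
  project = var.project_id
  role    = "roles/firebase.sdkAdminServiceAgent"
  member  = "serviceAccount:${google_service_account.function_sa.email}"
}

# Run the function as the dedicated account instead of the App Engine
# default (source, trigger, and entry_point omitted for brevity).
resource "google_cloudfunctions_function" "fn" {
  name                  = "my-function"
  runtime               = "nodejs18"
  service_account_email = google_service_account.function_sa.email
  # ...
}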
As a side note, I would also suggest deleting/disabling the default Compute Engine service account and deleting the default network with all its firewall rules and subnetworks... but that is a separate story.

How do I restrict access to Google Compute Engine to only my Firebase cloud functions

Is it possible to secure my service on Compute Engine so that only my Firebase functions can access it, using VPC/firewall rules?
Rather than using VPC/firewall rules to secure your GCE instance, you could use Identity-Aware Proxy (IAP) and have the function authenticate as a service account, using the default service account for Cloud Functions (project_id@appspot.gserviceaccount.com). This is very robust against network changes, and is very flexible.
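A minimal Terraform sketch of the IAP side of this, assuming an IAP-protected backend is already in place (the role ID and variables shown are assumptions to verify):

# Allow only the Cloud Functions default service account through IAP.
resource "google_iap_web_iam_member" "functions_access" {
  project = var.project_id
  role    = "roles/iap.httpsResourceAccessor"
  member  = "serviceAccount:${var.project_id}@appspot.gserviceaccount.com"
}

The function then sends an OIDC identity token for that service account with each request; IAP rejects callers that lack the role.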

Cloud alternative for WSO2 Pre-Packaged Identity Server and API Manager

I am currently hosting the Pre-Packaged Identity Server 5.2.0 with API Manager 2.0.0 [https://docs.wso2.com/display/CLUSTER44x/Configuring+the+Pre-Packaged+Identity+Server+5.2.0+with+API+Manager+2.0.0] on my own AWS instance.
I am planning to move to the managed cloud solution by WSO2, but I can only see independent installations of Identity Server and WSO2 API Manager. Is there a cloud alternative for the Identity Server + API Manager combo?
I am using WSO2 Identity Server for user management only, keeping my users in it. Can that be done in API Manager as well?
What is the cloud alternative for this?
WSO2 Cloud uses Identity Server to provide single sign-on. The Cloud deployment architecture is done in such a way that API Manager can also handle user management (that comes with the power of the WSO2 platform). You don't need to worry about the cloud offering API Manager and Identity Server separately.
If you are managing your subscribers and publishers, then it's an out-of-the-box scenario in the cloud. If you want to store the end users of the APIs (i.e. if you are using the password grant type), you can add a secondary user store and keep the end users in it.
I recommend raising these questions via the "Contact Support" option available in the Cloud UI.

How WSO2 API Manager distributed setup works

How does the deployment of an API to the Gateway node happen after publishing the API from the Publisher node in a distributed WSO2 APIM setup?
There is a section called <Environments> under <APIGateway> in the api-manager.xml configuration file. That is where the Gateway Environment section of an API in the Publisher webapp is populated from. When you select an environment there and publish from the Publisher webapp, it creates the Synapse artifact for the API and pushes it to the Gateway via an admin service call. The <ServerURL> is used for that call, so you need to define <ServerURL> correctly on the Publisher node so that it points to the Gateway node, as in the sketch below.
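A sketch of that section in api-manager.xml (host names and credentials are placeholders, and the exact layout can differ between APIM versions):

<APIGateway>
    <Environments>
        <Environment type="hybrid" api-console="true">
            <Name>Production and Sandbox</Name>
            <!-- Admin service URL of the Gateway node; the Publisher calls
                 this to push the Synapse artifact of a published API. -->
            <ServerURL>https://gw.example.com:9443/services/</ServerURL>
            <Username>admin</Username>
            <Password>admin</Password>
            <!-- Endpoints that API consumers invoke -->
            <GatewayEndpoint>http://gw.example.com:8280,https://gw.example.com:8243</GatewayEndpoint>
        </Environment>
    </Environments>
</APIGateway>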
