Component Gateway enable issues with Dataproc cluster through Terraform - google-cloud-endpoints

I spin up a Dataproc cluster on GCP through Terraform, but I noticed that the Component Gateway still shows as disabled even though my scripts are supposed to enable it.
software_config {
  optional_components = ["ANACONDA", "JUPYTER"]
  image_version       = "${var.cluster_version}"
  override_properties = {
    "dataproc:dataproc.allow.zero.workers"         = "true"
    "dataproc:dataproc.logging.stackdriver.enable" = "true"
    "dataproc:dataproc.enable_component_gateway"   = "true"
  }
}
While looking up references, it seems this feature cannot be enabled through Terraform, and there were also mentions of using an endpoint_config block as below:
endpoint_config {
  enable_http_port_access = "true"
}
But when I tried using this, it errored out with "invalid or unknown key: endpoint_Config".
Is there any other alternative to get this enabled through Terraform? (Note: I am using the google-beta provider.)
Thank you!

The issue here is that the Terraform provider is out of sync with the Dataproc API. If you would, please file a feature request on GitHub.
For the time being, you may have to create the cluster with the Component Gateway enabled manually.
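For reference, newer google-beta provider releases expose this setting under cluster_config as an endpoint_config block. A minimal sketch of how that is expected to look (resource and cluster names here are illustrative, and it assumes a provider version that already ships endpoint_config):
resource "google_dataproc_cluster" "example" {
  provider = google-beta
  name     = "example-cluster"
  region   = "us-central1"

  cluster_config {
    # Component Gateway is toggled here rather than via a dataproc: property
    endpoint_config {
      enable_http_port_access = true
    }

    software_config {
      optional_components = ["ANACONDA", "JUPYTER"]
      image_version       = "${var.cluster_version}"
    }
  }
}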

Related

Terraform kubernetes_pod resource is not created

I am very new to Terraform and I have been given a project to build a basic WordPress website on Google Cloud Platform using Kubernetes, automated by Terraform. I am trying to set up the Kubernetes pod for WordPress as follows:
resource "kubernetes_pod" "kubPod" {
metadata {
labels = {
app = "MyApp"
}
name = "terraform-example"
}
spec {
container {
image = "wordpress:4.8-apache"
name = "mywp"
}
}
}
When I build the project using terraform apply, I get the following output:
kubernetes_pod_v1.kubPod: Creating...
and after that Terraform exits without creating the pod. I do not understand what is wrong with the code; I suspect it might be related to a missing declaration, but the examples I looked at were quite similar to my code above. I would be glad if I could get some help.
My version information is as follows:
Terraform v1.2.8
on windows_amd64
provider registry.terraform.io/hashicorp/google v4.34.0
provider registry.terraform.io/hashicorp/kubernetes v2.13.1
provider registry.terraform.io/hashicorp/null v3.1.1
Regards.

How to add HealthChecks for AzureKeyVault health status

I was trying to add health checks for Azure Key Vault to my project and added the following NuGet package for that:
<PackageReference Include="AspNetCore.HealthChecks.AzureKeyVault" Version="6.0.2" />
And in code, added the following:
var url = "https://123456.com";
builder.Services
    .AddHealthChecks()
    .AddAzureKeyVault(new Uri(url), keyVaultCredential,
        options => { }, "AKV", HealthStatus.Unhealthy,
        tags: new string[] { "azure", "keyvault", "key-vault", "azure-keyvault" });
But the issue is that it shows Healthy for each and every URL, as long as it is a well-formed URL.
Even if some random values are passed in keyVaultCredential, it still shows a Healthy status.
Does someone know how to use this health check?
I have the same problem. I found that we need to add at least one Key Vault secret in the options to make it work, e.g. options => { options.AddSecret("SQLServerConnection--connectionString"); }
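Putting that together with the registration from the question, the call would look roughly like this (a sketch only; the vault URI and secret name are placeholders, and keyVaultCredential is assumed to be a valid TokenCredential):
// Register the Key Vault health check with at least one secret probe so the
// check actually round-trips to the vault instead of always reporting Healthy.
var keyVaultUri = new Uri("https://my-vault.vault.azure.net/");  // placeholder

builder.Services
    .AddHealthChecks()
    .AddAzureKeyVault(keyVaultUri, keyVaultCredential,
        options =>
        {
            // Probing a real secret forces an authenticated call to the vault.
            options.AddSecret("SQLServerConnection--connectionString");
        },
        name: "AKV",
        failureStatus: HealthStatus.Unhealthy,
        tags: new[] { "azure", "keyvault", "key-vault", "azure-keyvault" });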
Please check if there are any restrictions on reading the health status of Azure resources, or on the use of this library, within your company VPN network.
Try the same in a different network to check whether the cause is a network or VPN issue.
Try debugging tools to capture the traffic and inspect the response.
References:
AzureKeyVault health check always returns "healthy" (github.com)
AspNetCore.Diagnostics.HealthChecks

How to resolve 'error - InvalidDatasourceError: Datasource URL should use prisma'

So I am using Prisma as an ORM on my project to communicate with the database that I set up on AWS. Not happy with the AWS service, I am now switching my database to railway.app, which is working out well for me. However, I had set up a Prisma Data Proxy on my app with the AWS connection string, and now that I don't need it anymore I removed it, but I am getting an error:
error - InvalidDatasourceError: Datasource URL should use Prisma:// protocol.
If you are not using the Data Proxy, remove the data proxy from the preview features in your
schema and ensure that PRISMA_CLIENT_ENGINE_TYPE environment variable is not set to data proxy.
Since getting the error I have removed previewFeatures = ["dataProxy"] from the Prisma schema file to make it look like this (back to what it was before configuring the Data Proxy):
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
but the error still persists. How do I fix this?
Running prisma generate fixes this issue.
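A minimal sequence, assuming a Node project with the Prisma CLI available via npx (the unset step simply follows the advice in the error message above):
# Make sure the client engine type is no longer forced to the Data Proxy
unset PRISMA_CLIENT_ENGINE_TYPE

# Regenerate the Prisma Client against the plain postgresql datasource
npx prisma generate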

Does Airflow v1.10.3 come with Flask App Builder (FAB) by default, and does it require Google OAuth configs in webserver_config.py?

I have a question related to Airflow v1.10.3. We recently upgraded Airflow from v1.9 to v1.10.3. With the new upgrade, we are experiencing a situation where Celery execute commands coming in from the UI are not getting queued/executed in the message broker and Celery workers.
Based on the Celery FAQ (https://docs.celeryproject.org/en/latest/faq.html#why-is-task-delay-apply-the-worker-just-hanging), this points to an authentication issue: the user not having access.
We had web authentication (Google OAuth) in place in v1.9 using the following config:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.google_auth

[google]
client_id = <client id>
client_secret = <secret key>
oauth_callback_route = /oauth2callback
domain = <domain_name>.com
Will the above config values still work, or do we need to set RBAC = True and provide Google OAuth credentials in webserver_config.py?
webserver_config.py:
from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Admin"

OAUTH_PROVIDERS = [{
    'name': 'google',
    'whitelist': ['#yourdomain.com'],  # optional
    'token_key': 'access_token',
    'icon': 'fa-google',
    'remote_app': {
        'base_url': 'https://www.googleapis.com/oauth2/v2/',
        'request_token_params': {
            'scope': 'email profile'
        },
        'access_token_url': 'https://oauth2.googleapis.com/token',
        'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
        'request_token_url': None,
        'consumer_key': '<your_client_id>',
        'consumer_secret': '<your_client_secret>',
    }
}]
Any help is very much appreciated. Thanks.
From my experience, both will work. Of course, since they call the FAB-based UI the "new UI", the old one will probably be killed off eventually.
Your problem, though, doesn't sound like it has anything to do with user authentication, but with Celery access. It sounds like Airflow and/or Celery are not reading celery_result_backend or one of the other renamed options when they should.
Search for "Celery config" in their UPDATING document for the full list.
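For illustration, the renames land in the [celery] section of airflow.cfg; a sketch with placeholder values (the UPDATING document remains the authoritative list):
[celery]
# Airflow 1.10 renamed several Celery options, so configs still using the
# old names may no longer be picked up.
# celery_result_backend -> result_backend
result_backend = db+postgresql://airflow:airflow@postgres/airflow
# celeryd_concurrency -> worker_concurrency
worker_concurrency = 16
broker_url = redis://redis:6379/0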

Is it possible to set up a custom hostname for AWS Transfer SFTP via Terraform

I'm trying to set up an SFTP server with a custom hostname using AWS Transfer. I'm managing the resource with Terraform. I currently have the resource up and running, and I've used Terraform to create a Route53 record that points to the SFTP server, but the custom hostname entry on the SFTP dashboard reads as blank.
And of course, when I create the server manually through the AWS console and associate a Route53 record with it, it looks like what I would expect.
I've looked through the Terraform resource documentation, and I've tried to see how it might be done via the AWS CLI or CloudFormation, but I haven't had any luck.
My server resource looks like:
resource "aws_transfer_server" "sftp" {
identity_provider_type = "SERVICE_MANAGED"
logging_role = "${aws_iam_role.logging.arn}"
force_destroy = "false"
tags {
Name = ${local.product}-${terraform.workspace}"
}
}
and my Route53 record looks like:
resource "aws_route53_record" "dns_record_cname" {
zone_id = "${data.aws_route53_zone.sftp.zone_id}"
name = "${local.product}-${terraform.workspace}"
type = "CNAME"
records = ["${aws_transfer_server.sftp.endpoint}"]
ttl = "300"
}
Functionally, I can move forward with what I have, since I can connect to the server via my DNS record, but I'm trying to understand the complete picture.
From the AWS documentation:
When you create a server using AWS Cloud Development Kit (AWS CDK) or through the CLI, you must add a tag if you want that server to have a custom hostname. When you create a Transfer Family server by using the console, the tagging is done automatically.
So, you will need to be able to add those tags using Terraform. In v4.35.0 of the AWS provider, support was added for a new resource: aws_transfer_tag.
An example supplied in the GitHub issue (I haven't personally tested it yet):
resource "aws_transfer_server" "with_custom_domain" {
# config here
}
resource "aws_transfer_tag" "with_custom_domain_route53_zone_id" {
resource_arn = aws_transfer_server.with_custom_domain.arn
key = "aws:transfer:route53HostedZoneId"
value = "/hostedzone/ABCDE1111222233334444"
}
resource "aws_transfer_tag" "with_custom_domain_name" {
resource_arn = aws_transfer_server.with_custom_domain.arn
key = "aws:transfer:customHostname"
value = "abc.example.com"
}
