I get an error saying that nginx is unable to get my secret, even though it does exist when I check it in GCP.
When I checked the logs of the nginx-ingress-controller, it gives me this error: Error getting SSL certificate "default/my-certs": local SSL certificate default/my-certs was not found. Using default certificate
module "nginx-controller" {
  source     = "terraform-iaac/nginx-controller/helm"
  namespace  = "default"
  ip_address = data.google_compute_address.ingress_ip_address.address
  depends_on = [kubernetes_secret.store_ssl_private_key]
}
Service:
resource "kubernetes_service_v1" "exposing_app" {
  metadata {
    name = "service${var.app}"
  }
  spec {
    selector = {
      app = var.app
    }
    port {
      port        = 80
      target_port = 8080
      protocol    = "TCP"
      name        = "grpc-server"
    }
  }
}
Creating the secret:
resource "kubernetes_secret" "store_ssl_private_key" {
  metadata {
    name = "my-certs"
  }
  data = {
    "tls.crt" = var.CRT
    "tls.key" = var.PRIV_KEY_SSL
    "ca.crt"  = var.CA
  }
  type = "kubernetes.io/tls"
}
Ingress:
resource "kubernetes_ingress_v1" "exposing_app" {
  metadata {
    name = "exposingapp"
    annotations = {
      "kubernetes.io/ingress.class" = "nginx"
      #"nginx.ingress.kubernetes.io/ssl-redirect" = "false"
      #"nginx.ingress.kubernetes.io/ssl-redirect" = "true"
      "nginx.org/grpc-services" = "service${var.app} grpc-server"
      "nginx.ingress.kubernetes.io/backend-protocol" = "GRPC"
      "nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream" = "true"
      "nginx.ingress.kubernetes.io/auth-tls-secret" = "default/${kubernetes_secret.store_ssl_private_key.metadata.0.name}"
    }
  }
  spec {
    rule {
      host = var.ENV == "staging" ? var.website_staging : var.website_production
      http {
        path {
          backend {
            service {
              name = kubernetes_service_v1.exposing_app.metadata.0.name
              port {
                number = 80
              }
            }
          }
          path = "/*"
        }
      }
    }
    tls {
      hosts       = [var.ENV == "staging" ? var.website_staging : var.website_production]
      secret_name = kubernetes_secret.store_ssl_private_key.metadata.0.name
    }
  }
  depends_on = [
    kubernetes_secret.store_ssl_private_key
  ]
}
At first glance, this appears to be related to the way you create the TLS secret in Terraform. In the kubernetes_secret.store_ssl_private_key resource you set the various data attributes to Terraform variables. Are you providing those via file() input, or simply as strings containing the local paths to the certificate files?
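If those variables contain paths, the secret will store the path strings themselves rather than the certificate data. A sketch of the distinction (variable names taken from the question):

```hcl
data = {
  # Wrong if var.CRT is a path like "./certs/tls.crt" — the secret would
  # then contain the literal path string, not the certificate:
  # "tls.crt" = var.CRT

  # Read the file contents instead:
  "tls.crt" = file(var.CRT)
  "tls.key" = file(var.PRIV_KEY_SSL)
  "ca.crt"  = file(var.CA)
}
```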
In order to successfully generate the certificate secret via Terraform and ensure that it contains the right data, the ca.crt attribute needs to hold the actual file contents, just as it would if you created the secret via the CLI as indicated here.
You could try decoding the base64 value of your secret to verify it was created properly. I also found this post, which might be helpful in detailing how to create a TLS secret via Terraform.
EDIT1
Another thing specified in the official docs for using client certificates: the secrets they create are of type generic, not tls. Could you try provisioning a new secret using the commands indicated in the official example? Make sure to also provide the full CA certificate chain for the ca.crt key.
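For reference, a rough CLI equivalent (the file names are placeholders for your local certificate files):

```shell
# TLS secret from the certificate/key pair
kubectl create secret tls my-certs --cert=tls.crt --key=tls.key

# Or, as in the client-certificate example, a generic secret
# carrying the full CA certificate chain under the ca.crt key
kubectl create secret generic my-certs --from-file=ca.crt=ca-chain.pem
```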
How do I configure App Insights instrumentation for an app service via Terraform? Is it all via app_settings, or is there a resource I am missing?
I create the App Insights resource:
resource "azurerm_application_insights" "app1" {
  for_each            = local.all_envs
  application_type    = "web"
  location            = azurerm_resource_group.rg-webapps.location
  name                = "appi-app1-${each.value}"
  resource_group_name = azurerm_resource_group.rg-webapps.name
  retention_in_days   = 30
  sampling_percentage = 0
  workspace_id        = azurerm_log_analytics_workspace.log-analytics-workspace[each.value].id
}
I tie it to my app service:
resource "azurerm_windows_web_app" "app1" {
  name                = "app1"
  location            = azurerm_resource_group.rg-webapps.location
  resource_group_name = azurerm_resource_group.rg-webapps.name
  ...
  app_settings = {
    APPLICATIONINSIGHTS_ROLE_NAME         = "role1"
    APPINSIGHTS_INSTRUMENTATIONKEY        = azurerm_application_insights.app1["dev"].instrumentation_key
    APPLICATIONINSIGHTS_CONNECTION_STRING = azurerm_application_insights.app1["dev"].connection_string
  }
  ...
}
But the portal says Application Insights is not fully enabled:
Is instrumentation controlled by these config keys, which I have to manually set?
I tried setting just the instrumentation key and connection string in the app settings in my case, and it was not enabled in the portal.
app_settings = {
  "APPINSIGHTS_INSTRUMENTATIONKEY"        = azurerm_application_insights.<app>.instrumentation_key
  "APPLICATIONINSIGHTS_CONNECTION_STRING" = azurerm_application_insights.<app>.connection_string
}
Also include ApplicationInsightsAgent_EXTENSION_VERSION in the app settings.
app_settings = {
  "APPINSIGHTS_INSTRUMENTATIONKEY"        = azurerm_application_insights.<app>.instrumentation_key
  "APPLICATIONINSIGHTS_CONNECTION_STRING" = azurerm_application_insights.<app>.connection_string
  "APPINSIGHTS_PORTALINFO"                = "ASP.NET"
  "APPINSIGHTS_PROFILERFEATURE_VERSION"   = "1.0.0"
  "ApplicationInsightsAgent_EXTENSION_VERSION" = "~2"
}
To work properly, your app may require additional settings from the list below; check what works for your app.
"APPINSIGHTS_INSTRUMENTATIONKEY"
"APPINSIGHTS_PROFILERFEATURE_VERSION"
"APPINSIGHTS_SNAPSHOTFEATURE_VERSION"
"APPLICATIONINSIGHTS_CONNECTION_STRING"
"ApplicationInsightsAgent_EXTENSION_VERSION"
"DiagnosticServices_EXTENSION_VERSION"
"InstrumentationEngine_EXTENSION_VERSION"
"SnapshotDebugger_EXTENSION_VERSION"
"XDT_MicrosoftApplicationInsights_BaseExtensions"
"XDT_MicrosoftApplicationInsights_Mode"
Also try setting a tag on the azurerm_application_insights resource, as suggested by nancy in this SO reference:
resource "azurerm_application_insights" "webapp-ka-repo" {
  ...
  tags {
    "hidden-link:/subscriptions/<subscription id>/resourceGroups/<rg name>/providers/Microsoft.Web/sites/<site name>": "Resource"
  }
}
or
tags = {
  "hidden-link:/subscriptions/${data.azurerm_subscription.current.subscription_id}/resourceGroups/${azurerm_resource_group.example.name}/providers/Microsoft.Web/sites/<sitename>" = "Resource"
}
and check if it is enabled.
In Corda we are using CordaRPCClient to initiate transactions from the client. Here we pass a username and password to start the connection. Right now I am using a hardcoded username and password. Can I map this to a user table in the DB? Please share if any best practices exist.
Yes, you can definitely have RPC users fetched from a database. All you need is some configuration in the node's configuration file (node.conf).
The users are generally defined in the security block. Below is how it can be configured.
security = {
  authService = {
    dataSource = {
      type = "DB"
      passwordEncryption = "SHIRO_1_CRYPT"
      connection = {
        jdbcUrl         = "<jdbc connection string>"
        username        = "<db username>"
        password        = "<db user password>"
        driverClassName = "<JDBC driver>"
      }
    }
    options = {
      cache = {
        expireAfterSecs = 120
        maxEntries = 10000
      }
    }
  }
}
You can find more details in our documentation here.
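Purely as an illustration, the backing database typically holds user, role, and permission tables along the following lines (the names here are illustrative; check the Corda docs for the exact table and column names the "DB" authService data source queries):

```sql
-- Illustrative sketch only, not the authoritative Corda schema.
CREATE TABLE users (
  username VARCHAR(64)  PRIMARY KEY,
  passwd   VARCHAR(256) NOT NULL   -- SHIRO_1_CRYPT-encrypted password hash
);

CREATE TABLE user_roles (
  username  VARCHAR(64) NOT NULL,  -- maps users to roles
  role_name VARCHAR(64) NOT NULL
);

CREATE TABLE roles_permissions (
  role_name  VARCHAR(64)  NOT NULL,
  permission VARCHAR(256) NOT NULL -- e.g. a flow-start permission or ALL
);
```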
I'm trying to build an Istio out-of-process Mixer adapter.
Everything worked fine after a few attempts, and I now want to be able to configure my adapter using the "session based" adapter model.
If I understood the concept correctly, I only need to:
Create a config.proto with my parameters:
syntax = "proto3";

package config;

import "gogoproto/gogo.proto";

option (gogoproto.goproto_getters_all) = false;
option (gogoproto.equal_all) = false;
option (gogoproto.gostring_all) = false;

message Params {
  string value1 = 1;
  string value2 = 2;
}
compile it with protoc to output a descriptor_set file
base64 this descriptor set in adapter.yml
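Assuming the file above is saved as config.proto, those two steps might look something like this (the output file name is arbitrary; -w0 is the GNU base64 flag that disables line wrapping):

```shell
# Compile to a file descriptor set, bundling any imported protos
protoc --include_imports --descriptor_set_out=config.proto_descriptor config.proto

# Base64-encode the descriptor set for the config field of adapter.yml
base64 -w0 config.proto_descriptor
```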
apiVersion: "config.istio.io/v1alpha2"
kind: adapter
metadata:
  name: exampleadapter
  namespace: istio-system
spec:
  description: "A sample adapter for test purposes"
  session_based: true
  templates:
  - ...
  config: ...config.proto descriptor set base64 encoded ...
Implement this gRPC protobuf interface:
syntax = "proto3";

package istio.mixer.adapter.model.v1beta1;

option go_package = "istio.io/api/mixer/adapter/model/v1beta1";
option cc_generic_services = true;

import "google/protobuf/any.proto";
import "google/rpc/status.proto";

service InfrastructureBackend {
  rpc Validate(ValidateRequest) returns (ValidateResponse);
  rpc CreateSession(CreateSessionRequest) returns (CreateSessionResponse);
  rpc CloseSession(CloseSessionRequest) returns (CloseSessionResponse);
}

message CreateSessionRequest {
  google.protobuf.Any adapter_config = 1;
  map<string, google.protobuf.Any> inferred_types = 2;
}

message CreateSessionResponse {
  string session_id = 1;
  google.rpc.Status status = 2;
}

message ValidateRequest {
  google.protobuf.Any adapter_config = 1;
  map<string, google.protobuf.Any> inferred_types = 2;
}

message ValidateResponse {
  google.rpc.Status status = 1;
}

message CloseSessionRequest {
  string session_id = 1;
}

message CloseSessionResponse {
  google.rpc.Status status = 1;
}
Create a handler Istio config file that sets the config values:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: h1
  namespace: istio-system
spec:
  adapter: exampleadapter
  connection:
    address: exampleadapter:9070
  params:
    value1: testVal111
    value2: testVal2222
So I've done all these steps, but my Validate/CreateSession methods are never called.
I checked the mixer (istio-policy) logs, even in debug, but I don't have any clue why this config step is totally ignored!
Would someone have any ideas? Something obvious I missed?
Thanks in advance!
I got an answer on the Istio GitHub issue tracker: https://github.com/istio/istio/issues/19194#issuecomment-558922143
TL;DR: It's not implemented yet, and there is no plan to release this feature in the foreseeable future.
I'm using Terraform to try to set up my infrastructure on OVH.
From the docs I see that I can connect a compute instance to a network either by name:
resource "openstack_compute_instance_v2" "front" {
  network {
    name = "Ext-Net"
  }
}
or by port (then you need to create a port entity):
data "openstack_networking_network_v2" "ext_net" {
  name = "Ext-Net"
}

resource "openstack_networking_port_v2" "public_port" {
  name           = "public_port"
  network_id     = "${data.openstack_networking_network_v2.ext_net.id}"
  admin_state_up = "true"
}

resource "openstack_compute_instance_v2" "front" {
  network {
    port = "${openstack_networking_port_v2.public_port.id}"
  }
}
There is also a third option (connect by the network's UUID, but that is quite similar to using the network name).
In which case should I use a port instead of a network name?
Also, when I connect both interfaces by name and SSH into the freshly booted compute instance, I can see that the IPv4 address for the internal network is not set as expected. Is that normal for OVH, and should I set 10.0.0.1 manually with some kind of provisioning script?
network = [
  {
    name = "Ext-Net"
  },
  {
    name        = "internal"
    fixed_ip_v4 = "10.0.0.1"
  }
]
I am building a Web API (using ASP.NET Web API) that connects via secure WebSockets to an endpoint our client exposed (wss://client-domain:4747/app/engineData). They gave me their certificates, all in .pem format (root.pem and client.pem), and a private key (client_key.pem).
In order to get this done I did the following:
1) Converted client.pem and client_key.pem to a single .pfx file (used this here: Convert a CERT/PEM certificate to a PFX certificate)
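For reference, that conversion can be done with openssl along these lines (file names taken from the question; you will be prompted for an export password):

```shell
openssl pkcs12 -export \
  -in client.pem \
  -inkey client_key.pem \
  -certfile root.pem \
  -out cert.pfx
```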
2) I used the library System.Net.WebSockets, and wrote the following code:
private void InitWebSockesClient()
{
    client = new ClientWebSocket();
    client.Options.SetRequestHeader(HEADER_KEY, HEADER_VALUE); // Some headers I need
    AddCertificatesSecurity();
}

private void AddCertificatesSecurity()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls
        | SecurityProtocolType.Tls11
        | SecurityProtocolType.Tls12;
    // I KNOW THIS SHOULDN'T BE USED IN PROD; had to use it to make it
    // work locally.
    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
    X509Certificate2 x509 = new X509Certificate2();
    // this is the pfx I converted from client.pem and client_key
    byte[] rawData = ReadFile(certificatesPath + @"\cert.pfx");
    x509.Import(rawData, "123456", X509KeyStorageFlags.UserKeySet);
    X509Certificate2Collection certificateCollection = new X509Certificate2Collection(x509);
    client.Options.ClientCertificates = certificateCollection;
}
And when I want to connect I call:
public async Task<bool> Connect()
{
    Uri uriToConnect = new Uri(URL);
    await client.ConnectAsync(uriToConnect, CancellationToken.None);
    return client.State == WebSocketState.Open;
}
This works fine locally. But whenever I deploy my Web Api on Azure (App Service) and make an HTTP request to it, it throws:
System.Net.WebSockets.WebSocketException - Unable to connect to the remote server.
And the inner exception:
System.Net.WebException - The request was aborted: Could not create SSL/TLS secure channel.
I enabled WebSockets on the AppService instance.
If I delete the line that always returns true for the certificate validation, it doesn't work even locally, and the message says something like:
The remote certificate is invalid according to the validation procedure.
So I definitely have something wrong with the certificates. Those three .pem files are currently used in a similar Node.js app and work fine; the WSS connection is established properly. I don't really know what usage to give each one, so I am kind of lost here.
These are the cipher suites of the domain I want to connect to: https://i.stack.imgur.com/ZFbo3.png
Inspired by Tom's comment, I finally made it work by adding the certificate to the Web App in Azure App Service instead of trying to use it from the filesystem. First I uploaded the .pfx file in the SSL Certificates section in Azure. Then, in the App settings, I added a setting called WEBSITE_LOAD_CERTIFICATES with the thumbprint of the certificate I wanted (the .pfx).
After that, I modified my code to work like this:
private void InitWebSockesClient()
{
    client = new ClientWebSocket();
    client.Options.SetRequestHeader(HEADER_KEY, HEADER_VALUE); // Some headers I need
    AddCertificateToWebSocketsClient();
}

private void AddCertificateToWebSocketsClient()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls11
        | SecurityProtocolType.Tls12;
    // this should really validate the cert
    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
    // reading cert from store
    X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    certStore.Open(OpenFlags.ReadOnly);
    X509Certificate2Collection certCollection =
        certStore.Certificates.Find(X509FindType.FindByThumbprint,
                                    CERTIFICATES_THUMBPRINT,
                                    false);
    if (certCollection.Count > 0)
    {
        client.Options.ClientCertificates = certCollection;
    }
    else
    {
        // handle error
    }
    certStore.Close();
}
Where CERTIFICATES_THUMBPRINT is a string (the thumbprint of your certificate, the one you saw in Azure).
In case you want to make it work locally, you just need to install the certificate on your computer, as otherwise it obviously won't find it in the store.
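On Windows, importing the .pfx into the CurrentUser/My store (the one the code reads from) can be done with PowerShell, for example:

```powershell
# Cert:\CurrentUser\My corresponds to new X509Store(StoreName.My,
# StoreLocation.CurrentUser) in the code above; add -Password for a
# password-protected file.
Import-PfxCertificate -FilePath .\cert.pfx -CertStoreLocation Cert:\CurrentUser\My
```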
Reference for all this in Azure docs: https://learn.microsoft.com/en-us/azure/app-service/app-service-web-ssl-cert-load.