How to create nginx ingress in terraform - aks - nginx

How can I create an NGINX ingress in Azure Kubernetes Service (AKS) using Terraform? Earlier, in this link, I remember seeing some steps listed as a mandatory installation for all setups; right now that seems to have been removed, and there is a specific way of installing for AKS in this link. Should I rewrite all of these steps to adapt them to Terraform, or is there another smart way of installing the NGINX ingress for AKS through Terraform?

You could try using Terraform's helm provider.
provider "helm" {
kubernetes {
host = azurerm_kubernetes_cluster.your_cluster.kube_config.0.host
client_key = base64decode(azurerm_kubernetes_cluster.your_cluster.kube_config.0.client_key)
client_certificate = base64decode(azurerm_kubernetes_cluster.your_cluster.kube_config.0.client_certificate)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.your_cluster.kube_config.0.cluster_ca_certificate)
}
}
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "nginix_ingress" {
name = "nginx_ingress"
repository = data.helm_repository.stable.metadata.0.name
chart = "stable/nginx-ingress"
namespace = "kube-system"
}
If your cluster is already created, you will have to reference it with a data source instead, as sketched below. helm_release also supports custom values. Here is the link if you need more information.
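As a minimal sketch, assuming the existing cluster is called example-aks in the resource group example-rg (both placeholders), the helm provider could be wired up like this:
data "azurerm_kubernetes_cluster" "existing" {
  # placeholder names for the pre-existing cluster and its resource group
  name                = "example-aks"
  resource_group_name = "example-rg"
}

provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.existing.kube_config.0.host
    client_key             = base64decode(data.azurerm_kubernetes_cluster.existing.kube_config.0.client_key)
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.existing.kube_config.0.client_certificate)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.existing.kube_config.0.cluster_ca_certificate)
  }
}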

There is a nice tutorial, Create an Application Gateway ingress controller in Azure Kubernetes Service, and you can check GitHub for the Application Gateway Ingress Controller.
If you are using Terraform version 0.12 or higher, you can use the Terraform Kubernetes provider example.
As for the Terraform documentation, you should check the data source kubernetes_ingress and the resource kubernetes_ingress; a minimal sketch of the latter follows.
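A minimal sketch of the kubernetes_ingress resource (pre-v1 schema), routing / to a hypothetical example service on port 80:
resource "kubernetes_ingress" "example" {
  metadata {
    name = "example-ingress"
    annotations = {
      "kubernetes.io/ingress.class" = "nginx"
    }
  }

  spec {
    rule {
      http {
        path {
          path = "/"
          backend {
            # hypothetical backend service and port
            service_name = "example"
            service_port = 80
          }
        }
      }
    }
  }
}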
If you provide more details I'll update the answer.

Updated answer
resource "helm_release" "nginix-ingress" {
name = "nginix-ingress"
repository = "https://charts.bitnami.com/bitnami"
chart = "nginx"
namespace = "kube-system"
}

I offer an alternative, and in my opinion better, way to provision Kubernetes services like the Nginx ingress using Terraform.
My kustomize based modules have two main benefits over using helm based modules:
Patching instead of templating makes maintaining custom configuration easier across new upstream versions
My kustomization provider, unlike the helm provider, shows detailed diffs and even destroy/recreate actions during terraform plan
If you're interested, here's a detailed comparison between my kustomize based Nginx ingress Terraform module and a helm based module.
Using the module is straightforward:
# require and configure provider
terraform {
  required_providers {
    kustomization = {
      source = "kbst/kustomization"
    }
  }
}

provider "kustomization" {
  alias           = "example"
  kubeconfig_path = "~/.kube/config"
}
# call module
module "example_nginx" {
  providers = {
    # we're using the alias provider we configured above
    kustomization = kustomization.example
  }

  source  = "kbst.xyz/catalog/nginx/kustomization"
  version = "1.2.1-kbst.0" # find the latest version on https://www.kubestack.com/catalog/nginx

  # the configuration here assumes you're using Terraform's default workspace
  # use `terraform workspace list` to see the workspaces
  configuration_base_key = "default"
  configuration = {
    default = {
      replicas = [{
        name  = "ingress-nginx-controller"
        count = 5
      }]
    }
  }
}
The example module call uses kustomize's replicas attribute to change the number of replicas of the Nginx ingress controller.
The modules (I have them for more than just Nginx) allow defining the kustomization as part of the module call. They also bundle an upstream release, and you control the version using the version attribute.
Documentation for all available kustomization attributes can be found on the Kubestack website.
I maintain these modules as part of Kubestack, my open-source framework for platform teams working with Terraform and Kubernetes.

Related

kong error using deck: cannot create or update 'services' entities when not using a database

I have set up Kong in DB-less mode on RHEL by following the documentation below:
https://docs.konghq.com/gateway/latest/install-and-run/rhel/
Kong Gateway starts successfully. Below are the settings I added to the kong.conf file, where the database is turned off and the path to the declarative kong.yml is specified:
declarative_config = /temp/kong/kong.yml
database = off
Also, below is the current .yaml file, where I created a service using the link below:
https://docs.konghq.com/gateway/2.8.x/get-started/comprehensive/expose-services/
_format_version: "1.1"
services:
- host: mockbin.org
name: example_service
port: 80
protocol: http
routes:
- name: mocking
paths:
- /mock
strip_path: true
I have also installed decK to sync this declarative configuration.
However, when I use the deck sync command to add this service to Kong, I get the error below:
creating service example_service
Summary:
Created: 0
Updated: 0
Deleted: 0
Error: 1 errors occurred:
while processing event: {Create} service example_service failed: HTTP status 405 (message: "cannot create or update 'services' entities when not using a database")
I'd appreciate ideas on what could be wrong, as I believe we can create a service in DB-less mode, and I also think this is the declarative format that should work. Looking forward to hearing from you. Thanks.
You are correct that we can create a service in DB-less mode; however, the approach is different.
If you already have the new config file in YAML format, you can load it into Kong using the /config endpoint.
I also think that decK should be process-agnostic and usable with both DB and DB-less modes, but as it stands, loading the YAML config file via the /config endpoint looks like the best option, for example as sketched below.
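For example, assuming the Admin API is listening on localhost:8001 and the declarative file is named kong.yml (adjust both to your setup), the file can be posted like this:
curl -i -X POST http://localhost:8001/config \
  --form config=@kong.yml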

Issues after upgrading DB to MariaDB

I have recently built my Rundeck server, created a DB using MariaDB, and pointed Rundeck to it. I followed the official documentation for this on the Rundeck site. Since I changed from the system DB to MariaDB, the service no longer starts.
My rundeck-config.properties file looks like this:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/var/lib/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
#change hostname here
grails.serverURL=http://IP OF SERVER:4440
dataSource.driverClassName=
dataSource.url = jdbc:mysql://IP OF SERVER/rundeck?autoReconnect=true&useSSL=false
dataSource.username = DB User
dataSource.password = Password
grails.plugin.databasemigration.updateOnStart=true
autoReconnect=true
#to store projects on backend
rundeck.projectsStorageType=db
#Encryption for key storage
rundeck.storage.provider.1.type=
rundeck.storage.provider.1.path=keys
rundeck.storage.converter.1.type=jasypt-encryption
rundeck.storage.converter.1.path=keys
rundeck.storage.converter.1.config.encryptorType=custom
rundeck.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.storage.converter.1.config.provider=BC
#Encryption for project config storage
rundeck.projectsStorageType=db
rundeck.config.storage.converter.1.type=jasypt-encryption
rundeck.config.storage.converter.1.path=projects
rundeck.config.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.config.storage.converter.1.config.encryptorType=custom
rundeck.config.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.config.storage.converter.1.config.provider=BC
rundeck.feature.repository.enabled=true
Can anyone help with this?
A couple of things here:
Your dataSource.driverClassName is empty; set it to org.mariadb.jdbc.Driver (check the full example here).
Your rundeck.storage.provider.1.type is also empty; set it as rundeck.storage.provider.1.type=db (see the snippet below).
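Assuming the rest of rundeck-config.properties stays as posted, the two corrected lines would read:
dataSource.driverClassName=org.mariadb.jdbc.Driver
rundeck.storage.provider.1.type=db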

Cannot create wordpress web app in azure with Terraform

I am trying to create a WordPress web application on Azure with Terraform. Each web app has its own database. I managed to create the resource groups, database server, and databases, but I cannot create the WordPress web app. I can create a plain web app and everything works fine, but not WordPress. When I create the WordPress web app manually and import it to see what is different, I see that WordPress has repo_url and branch pointing to the wordpress-azure repo on GitHub. When I try to incorporate this in code, I get an error message.
resource "azurerm_mysql_database" "testtt" {
name = "testtt"
resource_group_name = azurerm_resource_group.RG_mok_2024.name
server_name = azurerm_mysql_server.wp-db-mok-2024.name
charset = "utf8"
coll`enter code here`ation = "utf8_unicode_ci"
}
resource "azurerm_app_service" "testtt" {
name = "testtt"
location = azurerm_resource_group.RG_mok_2024.location
resource_group_name = azurerm_resource_group.RG_mok_2024.name
app_service_plan_id = azurerm_app_service_plan.appserviceplan-wordpress-mok-6.id
site_config {
dotnet_framework_version = "v4.0"
scm_type = "GitHub"
default_documents = ["Default.htm","Default.html","Default.asp","index.htm","index.html","iistart.htm","default.aspx","index.php","hostingstart.html"]
}
source_control {
repo_url = "https://github.com/azureappserviceoss/wordpress-azure"
branch = "master"
}
connection_string {
name = "defaultConnection"
type = "MySQL"
value = "Database=testtt;Data Source=wp-db-mok-2024.mysql.database.azure.com;User Id=mysqladminun#wp-db-mok-2024;Password=password"
}
}
The error message I get when I am using the source_control part of the code is:
Error: "source_control": this field cannot be set
The source_control field is only exported; it cannot be used to connect a deployment source.
To execute an automated deployment directly, there is currently only the workaround via a local-exec null_resource.
The source control integration can be created using an Azure CLI / PowerShell script, which is then executed by the local-exec provisioner.
This works as follows:
resource "null_resource" "scm_integration" {
provisioner "local-exec" {
command = "${path.module}/enable_scm.ps1 -webAppName ${azurerm_app_service.testtt.name} -appResourceGroupName ${azurerm_resource_group.RG_mok_2024.name} -scmBranch master -repoUrl https://github.com/azureappserviceoss/wordpress-azure"
interpreter = ["pwsh", "-Command"]
}
}
In addition, you need the PowerShell script enable_scm.ps1.
In this GitHub issue the workaround, including the script, is described completely.
According to the Terraform documentation for azurerm_app_service, the source_control field is only exported, and it is ONLY exported when scm_type is set to LocalGit. You have set it to GitHub, and since source_control is an output value, according to the documentation you don't need it.
Furthermore, the collation line in your original post contained a stray enter code here snippet, but I guess that was pasted there by accident.
Finally, I hope that in the connection string value, your database password is not "password".
Can you try to set the source_control section before the site_config? There is an open issue for the Terraform azurerm_app_service provider that suggests this might be a work-around.
https://github.com/terraform-providers/terraform-provider-azurerm/issues/3696

Unable to network-bootstrap using enterprise 3.2 version, due to 73 outstanding database changes

I am using Enterprise version 3.2 of the Network Bootstrapper to build node configurations with devMode enabled. When I bootstrap with the default database backend (H2), it works fine.
But when I connect to an MSSQL DB backend, it fails to generate the node config with the following error:
"There are 73 outstanding database changes that need to be run. Please use the advanced migration tool. See: https://docs.corda.r3.com/database-management.html"
I do not have any CorDapps placed in the directory during my bootstrapping process.
The database is brand new and has no tables yet, but the bootstrapper still complains about outstanding database changes.
The link mentioned in the error recommends doing a database migration specific to a CorDapp, but in my case I do not even have a CorDapp.
How can I overcome this issue?
Here is the config file I used:
myLegalName="O=Branch,L=Bangalore,C=IN"
p2pAddress="192.168.100.104:11121"
devMode=true
rpcSettings {
    address="192.168.100.104:10011"
    adminAddress="192.168.100.104:11252"
}
rpcUsers=[
    {
        password=test
        permissions=[
            ALL
        ]
        user=user1
    }
]
dataSourceProperties = {
    dataSourceClassName = "com.microsoft.sqlserver.jdbc.SQLServerDataSource"
    dataSource.url = "jdbc:sqlserver://192.168.100.116:1433;databaseName=cordadb"
    dataSource.user = "adminuser"
    dataSource.password = "Password123"
}
database = {
    transactionIsolationLevel = READ_COMMITTED
}
jarDirs = ["/root/jdbcdriver/sqljdbc_6.2/enu/"]
Here is the command line that was invoked:
java -jar corda-tools-network-bootstrapper-3.2.jar --dir finance
The "73 outstanding database changes" referenced in the error message are the creation of the new database tables required by every Corda node.
You can run these automatically by adding database.runMigration=true to your node's node.conf file.
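A minimal sketch of the database block from the question's node.conf with the migration flag added (the rest of the file stays as posted):
database = {
    transactionIsolationLevel = READ_COMMITTED
    runMigration = true
}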

Launch RancherOs with Openstack and Terraform

Hi, I am running the latest OpenStack, Terraform, and RancherOS.
From the OpenStack UI I can get RancherOS to work and I can pass in my own SSH keys, for instance, but you need to explicitly tick the configuration drive checkbox, otherwise it will not accept the user data.
I don't think this is possible with Terraform, is it?
resource "openstack_compute_instance_v2" "terraform-rancher" {
name = "terraform-rancher"
image_name = "RancherOs"
flavor_name = "t2.xlarge"
security_groups = ["default"]
#This is on the same path as my terraform file.
user_data = "${file("test.txt")}"
network {
name = "provider"
}
}
The instance launches and gets created, but when I look at the logs, RancherOS cannot seem to find the config:
cloud-init: Datasource unavailable, skipping: cloud-drive: /media/config-2 (lastError: no such file or directory)
From the OpenStack UI it works fine, but as stated, you have to tick the config drive checkbox.
cloud-init: Datasource available: cloud-drive: /media/config-2
To get it to work like in the UI, the config_drive parameter in the instance configuration needs to be set to true.
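A minimal sketch based on the resource from the question, with config_drive enabled so cloud-init finds the user data on the config drive:
resource "openstack_compute_instance_v2" "terraform-rancher" {
  name            = "terraform-rancher"
  image_name      = "RancherOs"
  flavor_name     = "t2.xlarge"
  security_groups = ["default"]
  config_drive    = true

  # user data file on the same path as the Terraform file, as in the question
  user_data = file("test.txt")

  network {
    name = "provider"
  }
}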
