How am I supposed to create a private network/subnet on OVH using Terraform?
There is a generic OpenStack resource (openstack_networking_subnet_v2) and an OVH-specific one (ovh_publiccloud_private_network_subnet) if you use the ovh provider.
I am asking because when I follow this guide, my private network interface does not get an IPv4 address assigned (it looks like the same problem was already described in this question: Private network creation with Terraform on OVH's OpenStack). I can see an IP address in the Horizon control panel, but when I ssh to the instance using its Ext-Net IPv4 address and type ifconfig, there is no IPv4 address assigned to the private network interface. The interface is UP but has no IPv4 address. I am using the Terraform code from the guide, like this:
# Create a private subnetwork
resource "ovh_publiccloud_private_network_subnet" "private_subnet" {
  # Get the id of the ovh_publiccloud_private_network resource
  # named private_network
  network_id = "${ovh_publiccloud_private_network.private_network.id}"
  project_id = "${var.project_id}" # Your project id (OS_TENANT_ID)
  region     = "WAW1"              # Your region (OS_REGION_NAME)
  network    = "192.168.42.0/24"   # Global network
  start      = "192.168.42.2"      # First IP of the subnet
  end        = "192.168.42.200"    # Last IP of the subnet
  dhcp       = false               # Deactivate the DHCP service
  provider   = "ovh.ovh"           # Provider's name
}
resource "openstack_compute_instance_v2" "front" {
# Number of time the instance will be created
count = "${length(var.front_private_ip)}"
provider = "openstack.ovh" # Provider's name
name = "front" # Instance's name
flavor_name = "s1-2" # Flavor's name
image_name = "CoreOS Stable" # Image's name
key_pair = "${openstack_compute_keypair_v2.test_keypair.name}"
security_groups = ["default"] # Add into a security group
network = [
{
name = "Ext-Net" # Public interface name
}
,
{
# Private interface name
name = "${ovh_publiccloud_private_network.private_network.name}"
# Give an IP address depending on count.index
fixed_ip_v4 = "192.168.42.4"
}
]
}
So, as I said, the example above does not work for me (I have to manually assign the private IPv4 address on the interface, while I would like Terraform to do it for me). Then I discovered the terraform-ovh-publiccloud-network module on OVH's GitHub. I tried the simple example from that repo (copy-pasted from the README) and the second interface on the Bastion node gets an IPv4 address from the private range assigned successfully. From the module's code I can also see that the openstack_networking_subnet_v2 resource is used instead of the OVH-specific ovh_publiccloud_private_network_subnet. Why, and what is the difference between them? Which one am I supposed to use when I write my own Terraform definition from scratch?
My goal is just to create a private network/subnet and a compute instance with two interfaces (connected to the public Ext-Net and the private subnet I just created). Please provide a short working example for OVH if you have such experience, or let me know if I am missing something.
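For reference, here is a minimal sketch of the pure-OpenStack variant I would expect to work, based on what the module does (the resource names are mine, and I am assuming it is DHCP that actually configures the interface at boot, which my dhcp = false above would prevent):

resource "openstack_networking_network_v2" "private_network" {
  provider       = "openstack.ovh"
  name           = "private_network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "private_subnet" {
  provider    = "openstack.ovh"
  network_id  = "${openstack_networking_network_v2.private_network.id}"
  cidr        = "192.168.42.0/24"
  ip_version  = 4
  enable_dhcp = true # DHCP is what hands the address to the interface at boot
}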
You can rent a /24 of public IPs from OVH for around $800, but you would have to do that first.
I have created an EKS cluster with two public subnets, and successfully created one worker node (instance type = t3.small) in one of the public subnets.
I am able to create an Nginx deployment and a NodePort service, query the deployment and other k8s objects, and access the web application through the node port (<node-public-ip>:<nodeport>) successfully.
I am trying to create load balancers (both ALB and NLB), but both are failing.
The whole setup uses Terraform. I need help identifying why the LB creation (both types) is failing, and how I can fix it in my Terraform files.
Terraform file for the network load balancer:
resource "kubernetes_service_v1" "nlb-nginx-service" {
metadata {
name = "nlb-nginx-service"
annotations = {
"service.beta.kubernetes.io/aws-load-balancer-type" = "external"
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type" = "ip"
"service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
The error I get when I describe the NLB service:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Terraform file for the application load balancer:
resource "kubernetes_service_v1" "alb-nginx-service" {
metadata {
name = "alb-nginx-service"
annotations = {
"kubernetes.io/ingress.class" = "alb"
}
}
spec {
selector = {
app = kubernetes_deployment_v1.nginx-application.spec.0.selector.0.match_labels.app
}
port {
name = "http"
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
The error I get when I describe the ALB service:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 52s (x6 over 3m29s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 52s (x6 over 3m28s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Steps I took to identify the problem, which unfortunately didn't work:
tried to create these services on the cluster one by one
checked the service logs but didn't get a clue
It seems the load balancer cannot find a public subnet in AWS to place the LB in, as the error says 'could not find any suitable subnets for creating the ELB', but I am not aware of where to mention/assign a public subnet for my LB.
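From what I have read, the AWS controller discovers subnets for load balancers through tags, and an internet-facing load balancer needs public subnets tagged with kubernetes.io/role/elb = 1 plus the cluster ownership tag. A minimal sketch of tagging existing subnets from Terraform (var.public_subnet_ids and var.cluster_name are placeholders for my values):

# Tag the public subnets so the controller can discover them for ELBs.
resource "aws_ec2_tag" "cluster" {
  count       = length(var.public_subnet_ids)
  resource_id = var.public_subnet_ids[count.index]
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

resource "aws_ec2_tag" "elb_role" {
  count       = length(var.public_subnet_ids)
  resource_id = var.public_subnet_ids[count.index]
  key         = "kubernetes.io/role/elb" # role tag for internet-facing LBs
  value       = "1"
}

Also, as far as I understand, an ALB is normally provisioned through an Ingress resource rather than a Service of type LoadBalancer, so the alb-nginx-service above may need to become an Ingress.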
I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).
All of the resources related to the project are in a dedicated network.
network module:
resource "google_compute_network" "my_project" {
name = "my_project"
auto_create_subnetworks = true
}
output "my_project_network_self_link" {
value = google_compute_network.my_project_network.self_link
}
I use the network in the GKE cluster (network = "${var.network_link}"):
resource "google_container_cluster" "my_project" {
name = "my_project-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b"]
network = "${var.network_link}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
Node pools omitted.
and I set the network as authorized_network in the Memorystore configuration:
resource "google_redis_instance" "cache" {
name = "my_project-redis"
tier = "STANDARD_HA"
memory_size_gb = 1
authorized_network = "${var.network_link}"
# location_id = "us-central1-a"
redis_version = "REDIS_4_0"
display_name = "my_project Redis cache"
}
variable "network_link" {
description = "The link of the network instance is in"
type = string
}
I guess the problem is related to the network, because this was previously working fine with the default network.
Currently the GKE nodes are in us-central1-a and us-central1-b (as specified in the TF script) and the Memorystore instance is in us-central1-c. So the cluster and the Redis instance are in the same VPC but in different zones. Could this be the problem?
I had to add the following section to the cluster module in Terraform:

  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }

This seems to enable the VPC-native (alias IP) property of the cluster. With alias IPs, pod addresses come from the VPC itself, which is presumably what lets them reach the peered Memorystore range; routes-based (non-VPC-native) clusters need extra configuration to connect to Memorystore.
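For reference, instead of the empty strings (which let GKE pick the ranges automatically), the secondary ranges can also be named explicitly. A sketch, assuming you manage the subnetwork yourself (the names and CIDRs here are illustrative):

resource "google_compute_subnetwork" "gke" {
  name          = "gke-subnet"
  region        = "us-central1"
  network       = "${var.network_link}"
  ip_cidr_range = "10.10.0.0/20"

  # Alias IP ranges handed to GKE for pods and services
  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.20.0.0/16"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.30.0.0/20"
  }
}

Then in google_container_cluster:

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }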
I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
  name        = "compute_instance" # Instance name
  provider    = "openstack.myprovider"
  image_name  = "Centos 7"         # Image name
  flavor_name = "b2-7"             # Machine type name
  key_pair    = "${openstack_compute_keypair_v2.keypair.name}"

  network {
    name = "Ext-Net"
  }
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance. It could be anything, but here I have tried with a file provisioner and remote execution. Indeed, when I add these arguments to my compute instance, I notice that the instance does not get updated. For example (the provider and keypair blocks are unchanged from above):
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
Indeed, after adding the provisioners to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get an info message telling me:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
Following the Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way for Terraform to check the state of a local or a remote execution; there's no API call that can tell you what happened inside your custom code, bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the config of the instance, not really the content on the instance. Immutable content and recreation is the true path here: making a completely new instance. However, if it's your hammer, there are ways.
If you taint the resource that you want to update, then the next time Terraform runs, the resource will be recreated and the provisioners executed again. But heed what I said about hammers.
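For example, with the instance above:

terraform taint openstack_compute_instance_v2.compute_instance
terraform apply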
Alternatively, you could leverage your CM tool of choice to manage the content of your instance (Chef/Ansible), or create the images used by OpenStack (i.e. immutable) with a tool like Packer and update those. I'd do the latter.
I am trying to use Terraform to deploy some machines on an OpenStack cloud.
I have no problem creating networks, subnets, keys, security groups and rules, floating IPs, and network ports (with security groups attached), but when I try to create compute instances with two NICs (the network ports created before), I get a syntax error with no hint on how to resolve it.
Could you help me, please?
My code is:
resource "openstack_compute_instance_v2" "RNGPR-REBOND-01" {
name = "RNGPR-REBOND-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth0.id}"
access_network = true
}
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-REBOND-01.address}"
}
resource "openstack_compute_instance_v2" "RNGPR-LB-01" {
name = "RNGPR-LB-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth0.id}"
}
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-LB-01.address}"
}
And the syntax error is:
Error applying plan:
2 error(s) occurred:
* openstack_compute_instance_v2.RNGPR-REBOND-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
* openstack_compute_instance_v2.RNGPR-LB-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
From my experience, these error messages aren't very helpful.
I would first set TF_LOG=DEBUG and OS_DEBUG=1 in the environment where you are running Terraform. This will print error messages that are actually useful.
One time I was trying to create a server with a key pair that my user didn't have access to in OpenStack. I was receiving the same error and didn't figure it out until debugging was enabled.
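For example, in a Unix-like shell (adjust accordingly for PowerShell):

export TF_LOG=DEBUG
export OS_DEBUG=1
terraform apply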
By default goagent listens on 127.0.0.1:8087. I want to expose my goagent proxy service on 192.168.1.101:8080 so that my iPhone can also visit Facebook.
Any ideas?
You just need to configure proxy.ini (in the 'local' folder) as below:
[listen]
ip = your-hostname
port = 8087
visible = 1
debuginfo = 0
Replace your-hostname with your real hostname.
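For the setup in the question, that would presumably be:

[listen]
ip = 192.168.1.101
port = 8080

Then point the iPhone's Wi-Fi HTTP proxy at 192.168.1.101:8080.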