HAProxy Configuration with MariaDB Galera Cluster

I have a 3-node MariaDB Galera Cluster (MDB-001, MDB-002, MDB-003) behind HAProxy, with two databases (DB1, DB2).
I connect to the cluster through HAProxy, which is configured to route connections to the least-loaded node.
Now I need a configuration in which transactions on DB1 go to MDB-001 and transactions on DB2 go to MDB-002.
If MDB-001 goes down, connections for DB1 should be routed to the node with the least connections, and likewise, if MDB-002 goes down, connections for DB2 should be routed to the least-connections node.
Could you please help me set up such a configuration?
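A minimal sketch of one way to do this in haproxy.cfg, assuming one listener per database and placeholder backend addresses 192.168.0.11-13 (each application would connect to "its" port, 3307 for DB1 and 3308 for DB2):

    listen db1
        bind *:3307
        mode tcp
        balance leastconn
        option allbackups            # on failover, put all backups in rotation
        server MDB-001 192.168.0.11:3306 check
        server MDB-002 192.168.0.12:3306 check backup
        server MDB-003 192.168.0.13:3306 check backup

    listen db2
        bind *:3308
        mode tcp
        balance leastconn
        option allbackups
        server MDB-002 192.168.0.12:3306 check
        server MDB-001 192.168.0.11:3306 check backup
        server MDB-003 192.168.0.13:3306 check backup

While MDB-001 is healthy, it takes all DB1 traffic; when its health check fails, option allbackups activates both remaining nodes and balance leastconn sends each new connection to whichever has the fewest connections. Since HAProxy balances at the TCP level and cannot see which database a connection targets, the per-database split has to happen by pointing each database's clients at the matching listener port.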

Related

How to connect to on-premise OpenVPN server from OCI (Oracle Cloud Infrastructure) Compute instance?

My company has an on-premise network which is exposed through an OpenVPN server.
In ordinary scenarios, I used to connect to that server very easily.
However, when I tried to reach that server from an OCI compute instance (which I had connected to via SSH from my laptop), I ran into a problem: as soon as I try to connect to the VPN server, my SSH connection is closed.
IMHO, this may occur because the VPN connection changes the network configuration, so my SSH connection gets dropped.
I looked around for how to connect to a VPN from OCI, but almost everything used the IPSec protocol that Oracle provides; the rest was about building an OpenVPN server on the OCI instance itself.
I'm a complete novice at networking, so please give me some hints to resolve this problem.
Thanks,
Here is what I understand:
You have an Ubuntu 18.04 VM on a public subnet in OCI.
You have an OpenVPN server running on-prem.
You would like to access your on-prem network from the Ubuntu VM on OCI.
If I understood it correctly, the best way is to set up an IPSec VPN. It isn't that hard if you follow the right steps. At a high level, you will be doing the following. I used IKEv1 in my past attempts.
OCI:
Create a DRG
Attach/Associate it to your VCN
Create a CPE (Customer Premises Equipment) object and assign the public IP address of the OpenVPN server to it.
Create an IPSec connection on the DRG. It will create two tunnels, each with its own security information.
Set up routing on the associated subnet (i.e., the one that hosts the Ubuntu VM) so that traffic destined for the on-prem CIDR is routed to the DRG.
On-Prem:
Create the necessary configuration to bring up the tunnels to OCI (using the configuration information from the previous steps, such as the VPN server IP addresses and shared secrets)
Set up routing so that traffic destined for the OCI CIDR ranges is sent to the associated tunnel interface
This ensures that you can create multiple VMs on the OCI subnet, all of which can connect to your on-prem infrastructure. The OCI documentation has sufficient information on setting up this VPN connection.
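For reference, the OCI side could be scripted roughly like this with the OCI CLI (all OCIDs, names, and CIDRs below are placeholders; the same can be done in the console):

    oci network drg create --compartment-id <compartment-ocid> --display-name onprem-drg
    oci network drg-attachment create --drg-id <drg-ocid> --vcn-id <vcn-ocid>
    oci network cpe create --compartment-id <compartment-ocid> --ip-address <openvpn-server-public-ip> --display-name onprem-cpe
    oci network ip-sec-connection create --compartment-id <compartment-ocid> --cpe-id <cpe-ocid> --drg-id <drg-ocid> --static-routes '["<onprem-cidr>"]'

The route-table entry from the last step above then sends <onprem-cidr> to the DRG as the next hop.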
Alternatively, if your only requirement is to establish connectivity from the Ubuntu VM on OCI to the OpenVPN server on-prem, you can install any VPN client software and set it up. This doesn't need any of the configuration steps mentioned above.
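If you take this route, one detail matters for the SSH-drop symptom above: a profile that pushes a default route will redirect all traffic, including your SSH session, into the tunnel. A sketch assuming a profile named client.ovpn and an on-prem range of 10.8.0.0/16 (both placeholders):

    sudo apt-get install openvpn
    # --route-nopull ignores routes pushed by the server, so the default route
    # (and with it the SSH session) stays on the OCI network; only the on-prem
    # range is sent through the tunnel
    sudo openvpn --config client.ovpn --route-nopull --route 10.8.0.0 255.255.0.0 --daemon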
Worker nodes in private subnets have private IP addresses only (they do not have public IP addresses). They can only be accessed by other resources inside the VCN. Oracle recommends using bastion hosts to control external access (such as SSH) to worker nodes in private subnets. You can learn more about using SSH to connect through a bastion host here - https://docs.cloud.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/bastion-hosts.pdf

Kubernetes: unable to connect to a remote master node ("connection refused")

Hello, I am facing a kubeadm join problem on a remote server.
I want to create a multi-server, multi-node Kubernetes cluster.
I created a Vagrantfile to create a master node and N workers.
It works on a single server.
The master VM is a bridged VM, to make it accessible to the other VMs on the network.
I chose Calico as the network provider.
For the master node, this is what I've done using Ansible:
Initialize kubeadm.
Install the network provider.
Create the join command.
For the worker nodes:
Execute the join command to join the running master.
I successfully created the cluster on a single hardware server.
Now I am trying to create additional worker nodes on another server on the same LAN; I can ping the master successfully.
I try to join the master node using the generated command:
kubeadm join 192.168.0.27:6443 --token ecqb8f.jffj0hzau45b4ro2 \
    --ignore-preflight-errors all \
    --discovery-token-ca-cert-hash \
    sha256:94a0144fe419cfb0cb70b868cd43pbd7a7bf45432b3e586713b995b111bf134b
But it showed this error:
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.0.27:6443: connect: connection refused
I am asking if there is any specific network configuration needed to join a remote master node.
Another issue I am facing: I cannot assign a public IP to the VM using the bridged adapter, so I removed the static IP to let the DHCP server choose one for it.
Thank you.
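For a "connection refused" at the join step, a few generic first checks (assuming the master's address really is 192.168.0.27):

    # from the worker: can the API server port be reached at all?
    nc -vz 192.168.0.27 6443
    # on the master: is the API server listening on the bridged IP,
    # and not only on a NAT/host-only interface?
    sudo ss -tlnp | grep 6443
    # on the master: is a host firewall dropping the traffic?
    sudo ufw status

With Vagrant's usual multi-NIC layout, a common cause is the API server advertising the NAT interface; initializing with kubeadm init --apiserver-advertise-address=192.168.0.27 pins it to the bridged address.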

Rancher: unable to connect to a cluster IP from another project

I'm using Rancher 2.3.5 on CentOS 7.6.
In my cluster, "Project Network Isolation" is enabled.
I have two projects:
In project 1, I have deployed an Apache container that listens on port 80 on a cluster IP.
[image: network isolation configuration]
In the second project, I am unable to connect to project 1's cluster IP.
Does Project Network Isolation also block traffic to the cluster IP between the two projects?
Thank you
Other answers have correctly pointed out how a ClusterIP works from the standpoint of plain Kubernetes; however, the OP specifies that Rancher is involved.
Rancher provides the concept of a "Project", which is a collection of Kubernetes namespaces. When Rancher is set to enable "Project Network Isolation", Rancher manages Kubernetes Network Policies such that namespaces in different Projects cannot exchange traffic on the cluster overlay network.
This creates the situation observed by the OP. When "Project Network Isolation" is enabled, a ClusterIP in one Project is unable to exchange traffic with a traffic source in a different Project, even though they are in the same Kubernetes Cluster.
There is a brief note about this in the Rancher documentation:
https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/
and while that document seems to limit the scope to Pods, because Pods and ClusterIPs are allocated from the same network, it applies to ClusterIPs as well.
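To illustrate the mechanism with a generic Kubernetes NetworkPolicy (this is not the exact policy Rancher generates; the names and the team label below are hypothetical), a policy of roughly this shape in the receiving namespace would admit traffic from a namespace in another project:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-project2          # hypothetical name
      namespace: project1-apache         # hypothetical: namespace hosting the Apache workload
    spec:
      podSelector: {}                    # apply to every pod in this namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              team: project2             # hypothetical label on the source namespace

Note that Rancher may overwrite hand-edited policies it manages while Project Network Isolation is enabled, so in practice the supported routes are to move both workloads into one project or to turn isolation off.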
K8s cluster IPs are restricted to communication within the cluster. A good read on ClusterIP, NodePort, and LoadBalancer can be found at https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other .
If your intention is to make services in two different clusters communicate, then use one of the methods below:
Deploy an overlay network for your node group
Cluster peering

Needed ports for Kubernetes cluster

Suppose I want to create a k8s cluster on bare-metal servers, with 1 master and 2 nodes. What ports do I have to open in my firewall so that the master and nodes can communicate over the Internet? (I know I could just use a VPN, but I want to know which ports I need.) I guess I need at least the following ports. Do I need more? What about if I'm using Flannel or Calico? I want to compile a comprehensive list of all possible k8s services and the ports they need. Thank you.
kubectl - 8080
ui - 80 or 443 or 9090
etcd - 2379, 2380
The ports for Kubernetes are the following, from the CoreOS docs.
Kubernetes needs:
Master node(s):
TCP 6443* Kubernetes API Server
TCP 2379-2380 etcd server client API
TCP 10250 Kubelet API
TCP 10251 kube-scheduler
TCP 10252 kube-controller-manager
TCP 10255 Read-Only Kubelet API
Worker nodes (minions):
TCP 10250 Kubelet API
TCP 10255 Read-Only Kubelet API
TCP 30000-32767 NodePort Services
Provided that the API server, etcd, the scheduler, and the controller manager run on the same machine, the ports you would need to open publicly in the absence of a VPN are:
Master
6443 (or 8080 if TLS is disabled)
Client connections to the API server from nodes (kubelet, kube-proxy, pods) and users (kubectl, ...)
Nodes
10250 (insecure by default!)
Kubelet port, accepts connections from the API server (master).
Also, nodes should be able to receive traffic from other nodes and from the master on pretty much any port on the network fabric used for Kubernetes pods (Flannel, Weave, Calico, ...).
If you expose applications using a NodePort service or Ingress resource, the corresponding ports should also be open on your nodes.
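Translated into firewall rules on Ubuntu with ufw (just a sketch of the minimal public set described above; adapt to your firewall):

    # on the master:
    sudo ufw allow 6443/tcp             # Kubernetes API server
    # on each worker node:
    sudo ufw allow 10250/tcp            # kubelet API, contacted by the master
    sudo ufw allow 30000:32767/tcp      # NodePort services, if you expose any

The pod-network fabric needs its own rules between cluster members, e.g. UDP 8472 for Flannel's default VXLAN backend or IP-in-IP (protocol 4) for Calico's default configuration.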

Can't access port 7474 across Rackspace cloud servers

I've set up Neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (the app server), but the connection is refused.
I've tried enabling access from the app server to port 7474 on the Neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the app server. I can SSH between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(My apologies for this very basic question, but Rackspace support is not helping and I can't find any Rackspace-specific information on this.)
Glad we could solve the problem (see the comments on the question).
It so happens that Neo4j accepts connections only from localhost by default. To access Neo4j via the REST API from an app server within the same network, one has to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key, org.neo4j.server.webserver.address. You have a couple of options here:
Grant app servers in the same local network access to the Neo4j REST API
Grant everybody access and let the firewall handle it
For the first case, use the local IP address of the machine where Neo4j is running. Let's say your machines are connected via a private class C network and the machine with Neo4j has the IP 192.168.1.4; that's the value you want to enter for org.neo4j.server.webserver.address, so that your app server running in the same network (with, say, an IP of 192.168.1.5) can make network requests that are answered by the Neo4j web server.
For the second case, enter 0.0.0.0 as the value for org.neo4j.server.webserver.address to denote that you want to accept connections on all available IP addresses on that machine. In that case you want to set up your firewall to control who can talk to the server and who can't, even with authentication enabled.
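Concretely, the change is a one-line edit in neo4j-server.properties (the address is the example value from above):

    # neo4j-server.properties
    # option 1: listen on the server's private address only
    org.neo4j.server.webserver.address=192.168.1.4
    # option 2: listen on all interfaces and let the firewall restrict access
    #org.neo4j.server.webserver.address=0.0.0.0

Restart Neo4j afterwards and re-test the ufw rule from the question.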
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high-availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so that they can only be accessed from the proxy server, which routes write Cypher queries to the master and read queries to the slaves. The proxy itself had a hardware firewall on it to ensure that only specific app servers within the network have access to the Neo4j database.
