Open Source Glassfish 3.1.1 and clustering - glassfish-3

I want to test running a web application on a cluster to get information about performance. I've created a cluster with two nodes and deployed the application on both of them, and in the administration panel GlassFish displays the IPs of the nodes, from which I can get access to the application.
The cluster info page gives the following information:
Multicast Port: 2048
Multicast Address: 228.9.3.1
but if I type the given address and port, the resources aren't found. I changed these values to my host IP and port, but the problem is the same.
Am I doing something wrong? Should I do something more to set up the cluster IP and port?

In order to configure your cluster's ports, you should check the instance properties. Here's the path:
Clusters -> <your cluster name> -> Instances -> <your instance name> -> Properties
Then change the values you want.
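If you prefer the command line, roughly the same can be done with asadmin. This is only a sketch: the instance name instance1 and the port value are placeholders, and it assumes the instance exposes its ports as system properties (such as HTTP_LISTENER_PORT), which is the usual setup for GlassFish 3.1 cluster instances:
asadmin list-instances
asadmin list-system-properties instance1
asadmin create-system-properties --target instance1 HTTP_LISTENER_PORT=28081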

Related

Istio Primary-Remote, Different Network: Remote pilot address (in cluster 2) is bound to port 15012 (for xDS)

I am following this guide for primary remote setup on different network.
https://istio.io/latest/docs/setup/install/multicluster/primary-remote_multi-network/
Instead of using the load balancer on the primary to expose istiod, I am trying to use a NodePort. I also verified it using netcat and it works. Now on the remote machine, when I configure using IstioOperator, there is an option for remotePilotAddress but no option for the port, i.e. it gets bound to the default 15012. How can I change it to the NodePort on which I have exposed the control-plane istiod?
So far I couldn't find a way to do exactly that, but for a home setup you can bind the service to a reachable address, i.e. the node IP on which istiod is running.
You can do this via this command:
kubectl patch svc istio-eastwestgateway -p '{"spec":{"externalIPs":["x.x.x.x"]}}' -n istio-system
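To double-check that the address was applied, it should show up on the service afterwards (assuming the gateway lives in the usual istio-system namespace):
kubectl get svc istio-eastwestgateway -n istio-system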

Kubernetes: unable to get connected to a remote master node "connection refused"

Hello, I am facing a kubeadm join problem on a remote server.
I want to create a multi-server, multi-node Kubernetes Cluster.
I created a vagrantfile to create a master node and N workers.
It works on a single server.
The master VM is a bridged VM, to make it accessible to the other VMs available on the network.
I chose Calico as the network provider.
For the master node, this is what I've done:
Using Ansible:
Initialize Kubeadm.
Install a network provider.
Create the join command.
For Worker node:
I execute the join command to join the running master.
I successfully created the cluster on a single hardware server.
Now I am trying to create regular worker nodes on another server on the same LAN; I can ping the master successfully.
Then I try to join the master node using the generated command:
kubeadm join 192.168.0.27:6443 --token ecqb8f.jffj0hzau45b4ro2 \
    --ignore-preflight-errors all \
    --discovery-token-ca-cert-hash sha256:94a0144fe419cfb0cb70b868cd43pbd7a7bf45432b3e586713b995b111bf134b
But it showed this error:
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.0.27:6443: connect: connection refused
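(For reference, "connection refused" means nothing answered on that address and port at all; a quick reachability check from the worker, assuming netcat is installed, would be nc -vz 192.168.0.27 6443.)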
I am asking if there is any specific network configuration needed to join the remote master node.
Another issue I am facing: I cannot assign a public IP to the VM using the bridged adapter, so I removed the static IP to let the DHCP server choose one for it.
Thank you.

Kubernetes load balancer service packet source IP

I have a Kubernetes 1.13 cluster (a single node at the moment) set up on bare metal with kubeadm. The node has 2 network interfaces connected to it for testing purposes. Ideally, in the future, one interface should face the intranet and the other the public network. By then the number of nodes will also be larger than one.
For the intranet ingress I'm using HAProxy's Helm chart (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) set up with this configuration:
rbac:
  create: true
serviceAccount:
  create: true
controller:
  ingressClass: "intranet-ingress"
  metrics:
    enabled: true
  stats:
    enabled: true
    service:
      type: LoadBalancer
      externalIPs:
        - 10.X.X.X   # IP of one of the network interfaces
  service:
    externalIPs:
      - 10.X.X.X   # IP of the same interface
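For reference, such a chart is typically installed with something like this (Helm 2 syntax; the release name and values file name are placeholders):
helm install incubator/haproxy-ingress --name intranet-ingress -f haproxy-values.yaml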
The traffic then reaches haproxy as follows:
1. Client's browser, workstation has an IP from 172.26.X.X range
--local network, no NAT -->
2. Kubernetes server, port 443 of HAProxy's load balancer service
--magic done by kube-proxy, possibly NAT (which shouldn't have been here)-->
3. HAProxy's ingress controller pod
The HAProxy access log shows a source IP of 10.32.0.1. This is an IP from the Kubernetes network layer; the Kubernetes pod CIDR is 10.32.0.0/12. I, however, need the access log to show the actual source IP of the connection.
I've tried manually editing the LoadBalancer service created by HAProxy and setting externalTrafficPolicy: Local. That did not help.
How can I get the source IP of the client in this configuration?
I've fixed the problem; it turns out there were a couple of issues in my original configuration.
First, I didn't mention which network provider I'm using. It's weave-net, and it turns out that even though the Kubernetes documentation states that adding externalTrafficPolicy: Local to the load balancer service is enough to preserve the source IP, it doesn't work with weave-net unless you enable it specifically. On the version of weave-net I'm using (2.5.1), you have to add the environment variable NO_MASQ_LOCAL=1 to the weave-net DaemonSet. For more details, refer to their documentation.
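A minimal way to set that, assuming the stock weave-net DaemonSet in kube-system with its main container named weave:
kubectl set env daemonset/weave-net -n kube-system -c weave NO_MASQ_LOCAL=1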
Honestly, after that, my memory is a bit fuzzy, but I think what you get at this stage is a cluster where:
NodePort service: does not support source IP preservation. Somehow this works on AWS but is not supported on bare metal by Kubernetes itself; weave-net is not at fault.
LoadBalancer service on the node with IP X bound to an IP of another node Y: does not support source IP preservation as traffic has to be routed inside the kubernetes network.
LoadBalancer service on a node with IP X bound to the same IP X: I don't remember clearly, but I think this works.
Second, the thing is that Kubernetes, out of the box, does not support true LoadBalancer services. If you decide to stick with the "standard" setup without anything additional, you'll have to restrict your pods to run only on the nodes of the cluster that have the LB IP addresses bound to them. This makes managing a cluster a pain in the ass, as you become very dependent on the specific arrangement of components on the nodes. You also lose redundancy.
To address the second issue, you have to configure a load balancer implementation for your bare metal setup. I personally used MetalLB. With it configured, you give the load balancer service a list of IP addresses which are virtual, in the sense that they are not attached to a particular node. Every time Kubernetes launches a pod that accepts traffic from the LB service, it attaches one of the virtual IP addresses to that same node. So the LB IP address always moves around with the pod, and you never have to route external traffic through the Kubernetes network. As a result, you get 100% source IP preservation.
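As a rough sketch, the ConfigMap-based configuration used by older MetalLB releases looks roughly like this (newer versions use CRDs instead; the address range below is a placeholder):
kubectl apply -n metallb-system -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.X.X.240-10.X.X.250
EOF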

How to get associated ip address in openstack instance

I am trying to set up a Consul server in an OpenStack cluster. I have the server provisioned and have associated an IP with the server that is accessible from Vagrant boxes on developer machines.
I am able to join the server from a local Vagrant box if I use the -advertise flag on the consul agent -server command and use the floating IP I set. However, I am provisioning the server with Salt and need the machine to be able to determine that IP automatically.
By default, the server is using its bind address which is set to its 10.x.x.x local IP. That local IP is the only one I seem to be able to easily determine.
Is there a way to get an instance's floating ip(s)?
Bonus points: Is there a way to get an instance's name?
The information you are looking for is available to an instance using the OpenStack metadata service. It is basically a REST API that an instance can hit to get information specific to that instance. See more information here:
http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
You should be able to get both the instance name and its floating IP (look for "public-ipv4").
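From inside the instance, that typically looks like this (assuming the metadata service is reachable at the standard link-local address; the EC2-style paths expose the floating IP and hostname, and the OpenStack-native document includes the instance name):
curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/hostname
curl http://169.254.169.254/openstack/latest/meta_data.json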

How does OpenStack assign IPs to virtual machines?

I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To allocate a floating IP for your VM you can use this command:
openstack floating ip create public
To associate your VM and the IP use the command below:
openstack server add floating ip your-vm-name your-ip-number
To list all the ports used by applications, ssh to your instance and run:
sudo lsof -i
Assuming you know the VM name
do the following:
On controller run
nova interface-list VM-NAME
It will give you the port ID, IP address, and MAC address of the VM's interface.
You can log in to the VM and run
netstat -tlnp to see which IPs and ports are being used by applications running inside the VM.
As to how a VM gets an IP, it depends on your deployment. On a basic OpenStack deployment, when you create a network and a subnet under that network, you will see a DHCP namespace being created on the network node (run ip netns on the network node). The namespace name would be qdhcp-<network-id>. The dnsmasq process running inside that DHCP namespace allots IPs to the VMs. This is just one of the many ways in which a VM gets an IP.
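On the network node, that looks roughly like this (the network ID is whatever Neutron reports for your network):
sudo ip netns
sudo ip netns exec qdhcp-<network-id> ip addr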
This particular End User page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are of course deeper layers to look at in this section of the Admin Guide.
Regarding how to find out about ports and IPs, you have two options: command line interface or API.
For example, if you are using Neutron* and want to find out the IPs or networks in use with the API:
GET v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets (for example, the ones below); however, I haven't personally tested whether you can get information about the application running in the VM this way.
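The equivalent CLI calls for ports and subnets would be:
$ neutron port-list
$ neutron subnet-list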
*Check out which OpenStack release you're running. If it's an old one, chances are it's using the Compute node (Nova) for networking.
