Cellar clustering in servicemix - apache-karaf

Installed Cellar in 2 ServiceMix instances on 2 different PCs using the following commands:
features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.3.4/xml/features
features:install cellar
Added the IP address of node 2 to the <tcp-ip> tag of hazelcast.xml on node 1.
Started ServiceMix on PC 1 (node 1).
Started ServiceMix on PC 2 (node 2).
Checked cluster discovery using cluster:node-list on both nodes; each lists both node 1 and node 2.
I do not know how to test this clustering.
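A minimal way to exercise it, sketched here assuming both nodes sit in Cellar's default cluster group and that the standard Karaf feature commands are available (eventadmin is just an example feature):
On node 1:
cluster:group-list
cluster:feature-install default eventadmin
On node 2:
cluster:feature-list default | grep eventadmin
features:list | grep eventadmin
If the cluster is working, a feature installed through Cellar on node 1 should show up as installed on node 2 as well.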

Related

Bespoke affinity maps (process bindings) in mpich

I am implementing an application using MPICH (installed via sudo apt-get install mpich) on Linux (Ubuntu).
My current solution looks like this:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to core:1 MyApp
...
process 0 binding: 10001000
process 1 binding: 01000100
process 2 binding: 00100010
What I want, however, is to assign one process to 4 cores while the other two get 2 cores each. I want an affinity map that looks like this:
process 0 binding: 11001100
process 1 binding: 00100010
process 2 binding: 00010001
Using SMPD on Windows, I was able to obtain the required result using something like this:
mpiexec -n 1 -host localhost --bind-to core:2 MyApp : -n 2 -host localhost --bind-to core:1 MyApp
This, however, does not work with Hydra. I have read every manual by now and would be grateful for any help, even if it is a pointer to another Hydra manual that I have not read yet. Cheers!
The "user" keyword can be used to assign logical cores manually.
Hence, one can write:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to user:0+1+4+5,2+6,3+7 MyApp
Then, I obtain:
process 0 binding: 11001100
process 1 binding: 00100010
process 2 binding: 00010001
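If I read the syntax correctly, the comma separates the per-process core lists and "+" joins the cores given to a single process; under that reading, the original core:1 layout (10001000 / 01000100 / 00100010) could presumably also be written out explicitly as:
HYDRA_TOPO_DEBUG=1 mpiexec.hydra -n 3 -bind-to user:0+4,1+5,2+6 MyApp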

Mirantis Openstack Fuel unable to provision nodes with VIRT role

I need to reproduce a MOS 9.2 installation.
Previously, MOS 9.2 was installed on 7 bare-metal servers with the following roles:
2 - compute
3 - virt (it looks like the 3 controllers were deployed as virtual machines)
2 - ceph
I've successfully installed the Fuel master, updated it to 9.2, and created an environment. Now I need to add nodes with the appropriate roles, but when I try to assign the VIRT role to the 3 physical servers I get an error:
# fuel2 env add nodes -e 6 -n 9 -r virt
400 Client Error: Bad Request for url: http://MYIP:8000/api/v1/clusters/6/assigment/ (Role 'virt' restrictions mismatch: )
When I try to specify all 3 nodes:
# fuel2 env add nodes -e 6 -n 9,10,11 -r virt
fuel2 env add nodes: error: argument -n/--nodes: invalid int value: '9,10,11'
Also, I couldn't find the 'virt' role in the Fuel web UI.
I fixed this issue by editing /etc/nailgun/settings.yaml:
FEATURE_GROUPS:
- "advanced"

ipvsadm -L -n suddenly showing no active connections

I have a very odd problem in a proxy cluster of four Squid proxies:
One of the machines is the master. The master runs ldirectord, which checks the availability of all four machines and distributes new client connections.
All of a sudden, after years of operation, I'm encountering this problem:
1) The machine serving the master role is not being assigned new connections; old connections are served until a new proxy is assigned to the clients.
2) The other machines are still processing requests, taking over the clients from the master (so far, so good)
3) "ipvsadm -L -n" shows ever-decreasing ActiveConn and InActConn values.
Once I migrate the master role to another machine, "ipvsadm -L -n" is showing lots of active and inactive connections, until after about an hour the same thing happens on the new master.
Datapoint: This happened again this afternoon, and now "ipvsadm -L -n" shows:
TCP 141.42.1.215:8080 wlc persistent 1800
-> 141.42.1.216:8080 Route 1 98 0
-> 141.42.1.217:8080 Route 1 135 0
-> 141.42.1.218:8080 Route 1 1 0
-> 141.42.1.219:8080 Route 1 2 0
There has been no change in the numbers for quite some time now.
Some more stats (ipvsadm -L --stats -n):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 141.42.1.215:8080 1990351 87945600 0 13781M 0
-> 141.42.1.216:8080 561980 21850870 0 2828M 0
-> 141.42.1.217:8080 467499 23407969 0 3960M 0
-> 141.42.1.218:8080 439794 19364749 0 2659M 0
-> 141.42.1.219:8080 521378 23340673 0 4335M 0
Value for "Conns" is constant now for all realservers and the virtual server now. Traffic is still flowing (InPkts increasing).
I examined the output of "ipvsadm -L -n -c" and found:
25 FIN_WAIT
534 NONE
977 ESTABLISHED
Then I waited a minute and got:
21 FIN_WAIT
515 NONE
939 ESTABLISHED
It turned out that a local BIRD installation was injecting a route for the IP of the virtual server, which took precedence over ARP.
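For anyone hitting something similar, a quick check, assuming BIRD's birdc client is installed and using the VIP from above as the example address:
ip route get 141.42.1.215
birdc show route for 141.42.1.215
If the first command resolves the VIP through a route learned from BIRD rather than a local/direct entry, packets for the virtual server are being steered away from the load balancer.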

Multiple provider network management on different neutron nodes

I want to install the neutron server on different nodes. In my environment there will be 3 provider networks, named provider1, provider2 and provider3. All of them will be flat networks. I want each neutron server to manage a different provider network (neutron1 controls only provider1, neutron2 controls provider2, and neutron3 controls provider3). VMs will have internal (overlay) networks and use virtual routers to reach the provider networks. The interface mapping on the neutron servers is as follows:
Neutron 1
Bond 0 : Management + overlay
Bond 1 : use for provider1
Neutron 2
Bond 0 : Management + overlay
Bond 1 : use for provider2
Neutron 3
Bond 0 : Management + overlay
Bond 1 : use for provider3
A virtual router (VR) is randomly scheduled across multiple OpenStack Networking nodes. My question is: how can I deploy a VR on a specific neutron node (e.g. a VR whose gateway address comes from provider1 should be deployed on neutron1)? Alternatively, if I create a highly available VR, it will be deployed on all neutron servers; how can I select the active virtual router in that case?
I think DVR (Distributed Virtual Router) would be helpful in your case.
Below I describe some differences between DVR and non-DVR setups, based on the routes VM traffic takes.
DVR creates a virtual router on each compute node that hosts VMs, reducing the load on the network node and avoiding a single point of failure (SPOF).
Differences in how traffic is routed:
VM placement               | Subnets   | Router used with DVR                 | Router used without DVR
---------------------------|-----------|--------------------------------------|----------------------------------------------------
all VMs on the same node   | different | routed on each VM's own compute node | the designated network node (running the L3 agent)
VMs across multiple nodes  | different | routed on each VM's own compute node | the designated network node (running the L3 agent)
Differences when using floating IPs (but SNAT traffic is not HA; only one node can route it as of Ocata):
DVR                                   | non-DVR
--------------------------------------|---------------------------
each DVR handles its own floating IPs | only the network node does
The following configuration steps cover only a simple pattern; refer to the official tutorials to adapt them to your system.
Prerequisite: all compute nodes have the L3, DHCP, metadata and Open vSwitch agents installed.
Enable DVR on all compute nodes.
# vim /etc/neutron/neutron.conf
[DEFAULT]
...snip...
router_distributed = True
...snip...
Add the l2population driver on the controller node.
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
...snip...
mechanism_drivers = openvswitch,l2population
...snip...
Configure the SNAT router on the specified compute node.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr_snat
...snip...
Configure the agent mode to DVR on the remaining compute nodes.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr
...snip...
Edit openvswitch config on all compute nodes.
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
...snip...
l2_population = True
enable_distributed_routing = True
...snip...
Restart services for the changes to take effect.
On the controller node.
# systemctl restart neutron-server
On all compute nodes.
# systemctl restart neutron-l3-agent neutron-openvswitch-agent
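To verify the result, a small check, assuming the CLI clients are installed on the controller (<router-id> is a placeholder):
# openstack network agent list
# neutron l3-agent-list-hosting-router <router-id>
The first should show the L3 and Open vSwitch agents alive on every compute node; the second shows which nodes currently host a given router.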
I hope this will help you.

MPICH2 Hydra round robin on multicore

I need to schedule processes in round-robin order in my MPI program.
I have a cluster with 8 nodes, each with a quad-core processor.
I use mpich2-1.4.1p1 under Ubuntu Linux.
If I use this machinefile:
node01
node02
node03
node04
node05
node06
node07
node08
and then run:
mpiexec -np 10 -machinefile host ./my-program
I get the right scheduling: rank 0 on node01, rank 1 on node02, ..., rank 8 on node01, and finally rank 9 on node02.
But I need to know whether rank 0 and rank 8 run on the same core or not. I need rank 0 to run on the first core of node01 and rank 8 on the second.
If I use a different machinefile:
node01:4
node02:4
node03:4
node04:4
node05:4
node06:4
node07:4
node08:4
and then run:
mpiexec -np 10 -machinefile host2 ./my-program
Then ranks 0, 1, 2, 3 all run on node01, which isn't what I want.
How can I force Hydra to do round robin across nodes first and then across cores when using this second machinefile?
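One thing worth trying, offered only as a sketch: keep the first machinefile (one slot per node) so that Hydra keeps placing ranks round-robin across nodes, and inspect the resulting core assignment with the HYDRA_TOPO_DEBUG output shown in the earlier Hydra question. If ranks 0 and 8 turn out to share a core, the explicit user binding keyword from that answer lets you spell out per-process core lists yourself (assuming your Hydra build accepts -bind-to; older mpich2 releases may expose binding under a different option name):
HYDRA_TOPO_DEBUG=1 mpiexec -np 10 -machinefile host -bind-to core ./my-program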
