I need to implement a distributed cache, and I'm trying Infinispan for this.
I have 2 physically separate nodes. Each node runs on its own server, and the two servers can ping each other successfully.
In the JGroups configuration file "jgroups-tcp.xml", I configured the cluster as follows:
<TCP
    bind_addr="${jgroups.tcp.address:XX.XX.AA.AA}"
    bind_port="${jgroups.tcp.port:7800}"
    ...
/>
<TCPPING timeout="3000"
         initial_hosts="XX.XX.AA.AA[7800],XX.XX.BB.BB[7801]"
         port_range="5"
         num_initial_members="2"
         ergonomics="false"
/>
And I commented out the element .
Running the application with this configuration works on the machine whose IP is XX.XX.AA.AA, but not on the other machine, XX.XX.BB.BB, where I get this error:
org.infinispan.commons.CacheException: java.net.BindException: [TCP] /XX.XX.AA.AA is not a valid address on any local network interface.
For information: for the moment I'm creating the nodes from a static main method, so I don't think I need to involve any JBoss configuration...
Thanks a lot!
On the BB node, you have to set bind_addr to XX.XX.BB.BB. I assume you have done that, although you don't mention having two configurations; most likely there's a mistake in that second configuration.
If you don't want to keep two configuration files, you can set bind_addr="${jgroups.tcp.address}" and then pass -Djgroups.tcp.address=XX.XX.AA.AA (or XX.XX.BB.BB, respectively) on the command line when starting the JVMs.
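For example, with a single shared jgroups-tcp.xml (the jar name below is a placeholder for however you launch your application):

# on the AA machine
java -Djgroups.tcp.address=XX.XX.AA.AA -jar your-app.jar

# on the BB machine
java -Djgroups.tcp.address=XX.XX.BB.BB -jar your-app.jar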
I have more than 900 receive locations associated with the same host.
All receive locations are enabled but sometimes some of them are not working (and are still enabled).
When I disable and re-enable one of them, that receive location works again, but another one runs into trouble.
Are there any known limitations of the number of receive locations that can be associated with the same host in BizTalk 2016?
I don't know if there is a hard limit, but if you associate all the receive locations with the same host, your problems are probably due to the throttling mechanism.
While there are no hard limits to Receive Locations or Send Ports, there are still practical limits based on available resources.
900 is a lot for a single Host. Even if everything was running perfectly, I would still break that up across ~3 Hosts.
If these are File receive locations, there are other techniques to reduce the count even more. Some options:
1. Use a Windows Scheduler task to move files from the various locations into fewer locations, or maybe just one. If 'source' information is necessary, you can add a tag to the file name which can be extracted in a custom pipeline component (see the sketch after this list).
2. Modify the sample File adapter in the SDK to scan sub-folders as well. You can combine this with option 1 if you cannot modify the filename for some reason.
3. Similar to option 1, the script can write a meta-data file before moving the file, containing any data you need to preserve. The meta-data can then be read in a pipeline component.
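A minimal sketch of option 1, with hypothetical share paths; Python is used here just to illustrate the move-and-tag idea, but any scripting tool the Task Scheduler can run would do:

# hypothetical consolidation script for option 1, run periodically by Windows Task Scheduler
import shutil
from pathlib import Path

SOURCES = {
    "vendor_a": Path(r"\\share\vendor_a\in"),  # hypothetical source folders
    "vendor_b": Path(r"\\share\vendor_b\in"),
}
TARGET = Path(r"\\share\biztalk\in")           # the single folder BizTalk polls

for tag, folder in SOURCES.items():
    for f in folder.glob("*"):
        if not f.is_file():
            continue
        # prefix the source tag so a custom pipeline component can recover it later
        shutil.move(str(f), str(TARGET / f"{tag}__{f.name}"))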
I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are 2 or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration shown in this diagram.
[Image: configuration I am trying to achieve]
Is this configuration achievable or not?
Can we cluster 2 Publisher nodes and 2 Store nodes or not?
And how do we cluster the Key Manager? Does it use the same settings as the Identity Manager?
Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure that the components are using the ports specified by the offset?
Should we create a separate external database for each Carbon DB datasource entry in the master-datasources.xml file, or can we keep using the local H2 database for this? I have created the following databases; let me know whether I am correct in doing this or not. [Image: WSO2 databases I created]
I made several copies of the WSO2 binaries (as shown in the image) and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on the same server?
For load balancing, which components should we load balance, and which ports should be used?
That configuration is achievable, but the Analytics servers are best run on separate machines, as they use a lot of resources.
Yes, you can.
Yes, you need a port offset. To check which ports each instance is actually using, on Linux you can use the netstat -pln command and filter by the server's PID.
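For example (assuming the default carbon.xml layout; the offset value below is illustrative):

<!-- repository/conf/carbon.xml on the second instance sharing the server -->
<Ports>
    <Offset>1</Offset>  <!-- shifts every default port, e.g. 9443 -> 9444, 8243 -> 8244 -->
</Ports>

And to verify what a running instance actually bound:

# list listening sockets with owning PID, then filter by the server's PID
netstat -pln | grep <server-pid>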
Every server needs a local database; the other databases are shared, as described in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
Having copies is one way of doing it. Another way is letting a single server act as multiple components; for example, you can run the Publisher and Store components together. You can see the recommended patterns at https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every other component; for the Traffic Manager, use fail-over instead. Here are the ports you need to load balance:
Servlet ports - 9443 (https) / 9763 (http), for the admin console and admin services
NIO ports - 8243 (https) / 8280 (http), for API calls at the gateway
I am not good with Ohai. I would like to know if there is any way to find all the IP addresses (including the node's own) of the nodes in the respective subnet from a Chef recipe.
I have created one layer in AWS OpsWorks and want to add each node's IP address and hostname to a configuration file. Any help will be highly appreciated.
This depends a little bit on whether you want all the instances in one layer, or all the instances in your stack.
For the first, something like this - untested! - code in your recipe should work:
my_layer_name = "my_database_layer_or_whatever"

# each entry maps an instance name to that instance's attribute hash
node[:opsworks][:layers][my_layer_name][:instances].each do |instance_name, instance_data|
  puts instance_data[:private_dns_name]
end
Note that this gets you the private DNS name - internal to the OpsWorks network. You may or may not want that; there are a dozen or so other attributes on the object, including the public IP address.
If you wanted the instances for the entire stack, I'm betting you could loop through node[:opsworks][:layers] the same way I've looped through the instances here - just another loop, as sketched below.
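An untested sketch of that outer loop, using the same Chef 11 attribute names as above:

# every instance in every layer of the stack (Chef 11 attribute layout)
node[:opsworks][:layers].each do |layer_name, layer_data|
  layer_data[:instances].each do |instance_name, instance_data|
    puts "#{layer_name}/#{instance_name}: #{instance_data[:private_dns_name]}"
  end
end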
Also note this code is for Chef 11. In Chef 12 things have changed a bit.
If you're using Chef 12, I found the documentation on how to use/search the Chef Data Bags for OpsWorks.
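For reference, a sketch of the Chef 12 approach; it assumes the aws_opsworks_instance data bag that OpsWorks Stacks populates:

# Chef 12: OpsWorks exposes stack state as data bags rather than node attributes
search("aws_opsworks_instance").each do |instance|
  puts "#{instance['hostname']}: #{instance['private_ip']}"
end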
From reading the documents on MSDN, I understand it is recommended to create separate hosts by functionality (sending hosts, receiving hosts, and processing hosts), and that if there is only one host in the BizTalk server, that host performs all of the receiving, sending, and processing functionality.
My question is: is it possible to have multiple hosts where each host performs its own sending, receiving, and processing functions without affecting the others?
This is for multiple developers working on the same project, because our current situation doesn't allow us to have a full SQL Server database set and SQL Server instance for each developer, or to use VMs.
Thanks a lot!
Using multiple hosts is not a solution for letting multiple developers work on a single server: a single send/receive adapter can only be assigned to one host.
You will also run into other problems. Since all the configuration settings are shared in a single database, a change made by one developer will affect the others.
This same question was asked and answered at MSDN. What you are trying to do is not supported and will not work. There is no way around this.
You must deploy the same application code to each computer in a BizTalk Group.
Sharing a BizTalk computer for development work is not a workable or productive solution and will have a definite negative effect on productivity.
You are correct, the best way to handle DEV is a VM with the entire stack. This is the issue you must address in your environment.
I am aware that nodes can be started from the shell. What I am looking for is a way to start a remote node from within a module. I have searched, but have found nothing.
Any help is appreciated.
There's a pool(3) facility:
pool can be used to run a set of Erlang nodes as a pool of computational processors. It is organized as a master and a set of slave nodes.
pool:start/1,2 starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started. The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names, Args is used to specify command line arguments.
With pool you get a load distribution facility for free.
The master node may be started this way:
erl -sname poolmaster -rsh ssh
The -rsh flag here specifies an alternative to rsh for starting a slave node on a remote host - SSH in this case. Make sure your boxes have working SSH keys and that you can authenticate to the remote hosts using those keys.
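A minimal sketch of using the pool, assuming a .hosts.erlang file that lists your hosts (the pool name mypool is made up):

%% in the Erlang shell of the master node started above
1> Nodes = pool:start(mypool).                         %% boots one slave per host in .hosts.erlang
2> pool:get_nodes().                                   %% master plus the slaves currently in the pool
3> pool:pspawn(io, format, ["hello from the pool~n"]). %% spawns on the least-loaded node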
If there are no hosts in the file .hosts.erlang, then no slave nodes are started, and you can use slave:start/2,3 to start slave nodes manually, passing arguments if needed.
You could, for example, start a remote node like this:
%% H = remote host, Name = node name, M = an mnesia directory to pass along
Arg = "-mnesia_dir " ++ M,
slave:start(H, Name, Arg).
Ensure epmd(1) is up and running on the remote boxes in order to start Erlang nodes.
Hope that helps.
A bit more low-level than pool is the slave(3) module; pool builds upon the functionality in slave.
Use slave:start to start a new slave.
You should probably also specify -rsh ssh on the command-line.
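For example, a sketch with made-up host, node name, and cookie (the remote machine needs a matching cookie, the same Erlang version, and a running epmd):

%% from a node started with: erl -sname master -rsh ssh
1> {ok, Node} = slave:start(remotehost, worker, "-setcookie mycookie").
{ok, worker@remotehost}
2> rpc:call(Node, erlang, node, []).
worker@remotehost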
So use pool if you need the kind of functionality it offers; if you need something different, you can build it yourself on top of slave.