Importing an MSI changes port pipelines - BizTalk

We're using BizTalk 2013, and we have several resources that import web services in order to use their schemas and ports. Some of these ports are used simultaneously by several of our BizTalk applications. In the Administration Console, each of these ports is bound to the same physical port, so the scenario is several logical ports from different orchestrations and resources connected to one shared physical port.
With the whole system correctly configured, we import the MSI of some of these applications (MSI only, no bindings included). After the import finishes successfully, if we check the shared physical ports, the configured pipelines (XmlReceive) are lost and the PassThru pipelines are set instead.
After some digging: when you run an MSI import, binding files are created under .\AppData\Roaming\Microsoft\BizTalk Server\Deployment\BindingFiles for all the resources, I guess to capture the current configuration before the import and reapply it afterwards. In the first binding file created, the pipelines for the physical ports are configured correctly, but in the later files the ports are set to PassThru pipelines. When the MSI import finishes, the port seems to take its pipeline configuration from one of the files with PassThru.
Of course, if we export the bindings before the MSI import and re-import them afterwards, everything works perfectly, but that feels more like a workaround than a final solution.
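For reference, a minimal sketch of that workaround scripted around BTSTask might look like this (the application name and file paths are placeholders, not our real values):

    # Sketch of the export/import-bindings workaround around an MSI import.
    # APP, MSI and BINDINGS are placeholder values.
    import subprocess

    APP = "MySharedApp"
    MSI = r"C:\drops\MySharedApp.msi"
    BINDINGS = r"C:\drops\MySharedApp.BindingInfo.xml"

    def btstask(*args):
        """Run BTSTask.exe and raise if it exits with a non-zero code."""
        subprocess.run(["BTSTask.exe", *args], check=True)

    # 1. Export the current bindings (including the shared ports' pipelines).
    btstask("ExportBindings", f"/Destination:{BINDINGS}", f"/ApplicationName:{APP}")

    # 2. Import the MSI (resources only, no bindings in the package).
    btstask("ImportApp", f"/Package:{MSI}", f"/ApplicationName:{APP}", "/Overwrite")

    # 3. Re-import the saved bindings so XmlReceive is restored on the shared ports.
    btstask("ImportBindings", f"/Source:{BINDINGS}", f"/ApplicationName:{APP}")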
It would be great to know whether any of you have hit this same problem, and/or have ideas about where I can dig further to avoid it.

Related

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC network peering is a perfect solution to this. But unfortunately one of the existing networks is "legacy". Here's what google docs state about legacy networks.
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I could shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down.
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script that fully automates this (migrating a whole network); I wouldn't be surprised if someone has already done that.
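A rough sketch of those three steps with the google-api-python-client Compute API could look like the following; the project, zone, instance, and network names are placeholders, and a real script would also copy the rest of the instance's settings (metadata, tags, service accounts) and wait for each operation to finish:

    # Sketch: move one VM to a VPC network by deleting it (keeping its boot
    # disk) and recreating it on the new network. All names are placeholders.
    from googleapiclient import discovery

    PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "my-vm"
    NEW_NETWORK = "projects/my-project/global/networks/new-vpc"
    NEW_SUBNET = "projects/my-project/regions/us-central1/subnetworks/new-subnet"

    compute = discovery.build("compute", "v1")

    # 1. Get the VM's current parameters.
    vm = compute.instances().get(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()

    # Make sure the boot disk survives the instance deletion.
    boot_disk = next(d for d in vm["disks"] if d.get("boot"))
    compute.instances().setDiskAutoDelete(
        project=PROJECT, zone=ZONE, instance=INSTANCE,
        deviceName=boot_disk["deviceName"], autoDelete=False).execute()

    # 2. Delete the VM without deleting the persistent disk.
    compute.instances().delete(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()
    # ...wait for the delete operation to complete before continuing...

    # 3. Recreate the VM in the new VPC network with the existing disk.
    body = {
        "name": INSTANCE,
        "machineType": vm["machineType"],
        "disks": [{"source": boot_disk["source"], "boot": True, "autoDelete": False}],
        "networkInterfaces": [{"network": NEW_NETWORK, "subnetwork": NEW_SUBNET}],
    }
    compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()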
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migrating a whole Instance Group, Load Balancer, etc. Check it out.

WSO2 clustering in a distributed deployment

I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are 2 or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration shown in this diagram.
Image of Config I am trying to achieve:
Is this configuration achievable or not?
Can we cluster 2 Publisher nodes and 2 Store nodes or not?
And how do we cluster the Key Manager? Do we use the same settings as for the Identity Manager?
Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure that the components are using the ports implied by the port offset?
Should we create a separate external database for each Carbon DB datasource entry in the master-datasources.xml file, or can we keep using the local H2 database for this? I have created the following databases; let me know whether I am doing this correctly or not. (WSO2 databases I created:)
I made several copies of the WSO2 binaries, as shown in the image, and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on one server?
For load balancing, which components should we load balance, and which ports should be used for load balancing?
That configuration is achievable, but the Analytics servers are best run on separate machines as they use a lot of resources.
Yes, you can.
Yes, you need a port offset. If you're on Linux, you can use the netstat -pln command and filter by the server's PID to check which ports each instance is actually listening on.
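As a rough illustration, assuming the usual defaults of 9443/9763 for the servlet ports and 8243/8280 for the NIO ports, and made-up offsets for two co-located components, a small script like this can confirm which offset ports are actually being listened on (not every base port applies to every component):

    # Check that each co-located WSO2 instance listens on base port + offset.
    # Base ports and offsets below are example values, not your real topology.
    import socket

    BASE_PORTS = {"servlet-https": 9443, "servlet-http": 9763,
                  "nio-https": 8243, "nio-http": 8280}
    OFFSETS = {"publisher": 0, "store": 1}   # example port offsets

    def is_listening(port, host="127.0.0.1"):
        """Return True if something accepts TCP connections on host:port."""
        with socket.socket() as s:
            s.settimeout(1)
            return s.connect_ex((host, port)) == 0

    for component, offset in OFFSETS.items():
        for name, base in BASE_PORTS.items():
            port = base + offset
            state = "listening" if is_listening(port) else "not listening"
            print(f"{component:10s} {name:13s} {port}: {state}")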
Every server needs its own local database; the other databases are shared, as described in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
Having copies is one way of doing that. Another way is letting a single server act as multiple components. For example, you can run publisher and store components together. You can see the recommended patterns in https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every other component; for the Traffic Manager, you can use fail-over. Here are the ports you need to load balance:
Servlet port - 9443(https)/9763 (For admin console and admin services)
NIO port - 8243(https)/8280 (For API calls at gateway)

Rerouting Application Network Traffic at the Data Link Layer

Consider the following situation:
You have an application you are testing, but in order to test the networking functionality of said program, you are required to run multiple instances of it and have them communicate with one another.
Possible solutions are:
- Run software on individual machines connected by WAN or LAN.
- Run the software on virtual machines, all on the same computer.
I do not want to use either of these methods (the reasoning is irrelevant). I want to know whether there is a way I can reroute network transmissions from the test application (ideally in any programming language) such that I can run multiple instances of the same software on one computer and have each behave as if it were the only instance running on that computer.
In other words, I want to be able to code the application so that each instance listens on the same "listening" port (since only one instance will be running on each computer in production). Then, I want to know whether I can reroute the network requests at a lower level than the application so that they do not interfere with each other (clash over the same port number).
Essentially, I want to build a virtual environment which only redirects the network calls (whereas a virtual machine takes far more resources and involves much more). Is this possible, and how might I approach this problem?
Thank you!
UPDATE: This is a more accurate idea of what I want to accomplish:
Basically, I want to program another application which TRANSPARENTLY redirects bind requests to available ports and manages which applications are bound where... So from the application's perspective, all the instances are bound to port 1000, but in reality this other application is automatically managing which instance is bound where and avoiding potential conflicts. I feel like this could be accomplished with Windows Hooks, but I'm not sure how to implement it.
As far as I know, there is no sane way to multiplex the same port on the same network device. At the very minimum, you will need to choose one of the following:
Run each instance of your program on a different port
Create multiple virtual network interfaces
The first choice is easy and is probably the one I would choose. The second is closer to what you are looking for, but it would be a true PITA to set up - you can look into VirtualBox and its host-only networks for inspiration. If you are working on Linux, you might look into pipes and chrooting, but you'll spend more time setting up that environment than writing your software.
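If you really want to keep the same port number everywhere, one cheap variant of the second choice on Linux is to give each instance its own loopback address (127.0.0.1, 127.0.0.2, ...), since the whole 127.0.0.0/8 range routes to the loopback interface by default; this is only a sketch of that idea, and on Windows/macOS you would have to add the loopback aliases yourself first:

    # Two co-located instances "listening on port 1000" without clashing,
    # each bound to its own loopback address. Works as-is on Linux; note
    # that ports below 1024 require root/administrator privileges.
    import socket

    PORT = 1000  # the single port number every instance believes it owns

    def make_listener(instance_id):
        """Bind the shared port on a per-instance loopback address."""
        host = f"127.0.0.{instance_id}"          # 127.0.0.1, 127.0.0.2, ...
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, PORT))
        srv.listen()
        print(f"instance {instance_id} listening on {host}:{PORT}")
        return srv

    if __name__ == "__main__":
        listeners = [make_listener(i) for i in (1, 2)]
        # Each instance accepts on its own address; peers connect to
        # 127.0.0.<id>:1000 instead of a shared machine name.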

How to set up a BizTalk active/active cluster

I am setting up a virtual environment as a proof of concept with the following architecture:
2 node web farm
2 node SQL active/passive fail-over cluster
2 node BizTalk active/active cluster
The first two are straightforward; now I'm wondering about the BizTalk cluster.
If I followed the same model as setting up SQL (using the Failover Cluster Manager in Windows to create a cluster), I think I would end up with an active/passive cluster.
What makes a BizTalk cluster Active/Active?
Do I need to create a windows cluster first, or do I just install BizTalk on both machines and configure BizTalk appropriately?
Yes, my understanding is that you do need to cluster the OS first.
That said, you can usually avoid the need for clustering unless you need to cluster one of the 'pull' receive handlers like FTP, MSMQ, SAP etc. For everything else IMO it usually makes sense just to add multiple BizTalk servers in a group, and then use NLB for e.g. WCF Receive adapters.
The rationale for running multiple host instances of each 'type' (e.g. 2+ Receive, 2+ Process, 2+ Send) is that you also gain the ability to stop and start host instances without any downtime, e.g. for maintenance (patching), application deployment, etc.
The one caveat with the group approach is that the SSO master doesn't fail over automatically, although this isn't usually a problem as the other servers will still be able to work from the cache.
You can configure a BizTalk Group in a multi-computer environment; refer to the document available from the MSDN download center for more details. It specifically has a section titled "Considerations for clustering BizTalk Server in a Multiple Server environment".
You can additionally configure your BizTalk host as a clustered resource; refer to the documentation available on MSDN for more details.

How to use nServiceBus in a failover cluster

We're using nServiceBus in our development environment, where we have a frontend publishing messages to a service (subscriber). Life is good.
FrontendWebServer -> MiddlewareServer
In our production environment, we'll be running two frontends and two middleware servers for failover.
FrontendWebServer -> LoadBalancer(F5) -> MiddlewareServer
FrontendWebServer -> LoadBalancer(F5) -> MiddlewareServer
This works well for URLs, but because we need to use machine names for MSMQ we're stuck.
We don't want to specify a physical middleware machine name in each frontend config (because it makes managing the configs harder, and if one middleware server goes down, it will also stop messages from its particular frontend).
We tried to use the nServiceBus distributor (installed on each frontend), but it seems that a subscriber can only listen to one distributor.
Any ideas how we can get around this problem without using separate configs?
I would put the F5 in front of the web servers to balance that load. For the cluster, just reference the clustered server name and services, not the individual machines. For example, if you have Node1 and Node2 you might call the cluster NSBNode or something like that.
If you make that cluster the Distributor, you can then add multiple Worker nodes behind it to load balance further. Again, in this case reference the cluster queue names (queue@ClusterName).
