Defining Apigee Target Server hosts (the right way) in an HA architecture

I'm on 14.04 On-Prem
I have an Active and DR setup
see here: http://www.slideshare.net/michaelgeiser/apigee-dc-failover
When I failover to the DR site, I update my DNS entry (at Akamai)
Virtual hosts work fine; Target Servers are giving me a headache
How can I set up and work with Target Servers so that I don't have to modify the API proxy bundle, but traffic still flows to the right VIP based on the DC?
I'd prefer not to do something like MyService-Target-dc1 and MyService-Target-dc2 and use the deploy script to modify the target name in the bundle.
I don't want a JavaScript policy that modifies the target (or anything else) in the API proxy; I need to define this in the environment setup.
I also cannot put the two DCs each into a separate Org; I need to use the same API Key when I move between the Active and DR sites; different Orgs mean different API Keys (right?).
TIA

One option is to modify DNS resolution on each DC's set of Message Processors so that a name like 'myservice.target.dc' resolves to a different VIP per DC. You'll of course want to document this well, especially since it lives outside the Apigee product.
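A minimal sketch of that, assuming you control resolution with /etc/hosts on the Message Processors (the hostname and VIPs here are made up):

    # /etc/hosts on the DC1 Message Processors
    10.1.0.10    myservice.target.dc

    # /etc/hosts on the DC2 Message Processors
    10.2.0.10    myservice.target.dc

The Target Server then points at myservice.target.dc in both DCs, and each DC's MPs resolve it to their local VIP.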
I know you weren't too keen on modifying the target, but if you were open to that option, you could try using the Host header of an ELB in front (if you have one) or the client IP address (e.g., in geo-based routing) to identify which DC a call is passing through. From there, you can modify the target URL.
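If you went that way, a declarative variant is a conditional RouteRule in the ProxyEndpoint that picks a TargetEndpoint based on the incoming Host header. The hostnames and endpoint names below are hypothetical; RouteRules are evaluated in order, with the condition-less rule acting as the default:

    <RouteRule name="route-dc1">
        <Condition>request.header.Host = "api-dc1.example.com"</Condition>
        <TargetEndpoint>dc1</TargetEndpoint>
    </RouteRule>
    <RouteRule name="route-default">
        <TargetEndpoint>dc2</TargetEndpoint>
    </RouteRule>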
And yes, different Orgs do mean different API keys.

You can try Named Target Servers. They are part of the load-balancing function, but you can set them up individually and have different targets for different environments. See:
Load Balancing Across Backend Servers
http://apigee.com/docs/api-services/content/load-balancing-across-backend-servers
Create a Named Target Server
http://apigee.com/docs/management/apis/post/organizations/%7Borg_name%7D/environments/%7Benv_name%7D/targetservers
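The useful property is that a Target Server is defined per environment, so the same name can resolve to a different host in each place while the proxy bundle stays unchanged. A sketch against the management API, assuming each DC maps to its own environment (the org, environment, host, and credentials are placeholders):

    # define the same named Target Server in each DC's environment,
    # pointing at that DC's VIP
    curl -u admin -X POST -H "Content-Type: application/xml" \
      "http://mgmt:8080/v1/o/myorg/e/prod-dc1/targetservers" \
      -d '<TargetServer name="MyService-Target">
            <Host>vip.dc1.example.com</Host>
            <Port>443</Port>
            <IsEnabled>true</IsEnabled>
          </TargetServer>'

The proxy's TargetEndpoint then references only the name:

    <HTTPTargetConnection>
        <LoadBalancer>
            <Server name="MyService-Target"/>
        </LoadBalancer>
        <Path>/v1/myservice</Path>
    </HTTPTargetConnection>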

Related

Is it possible to create a load balancer in gcp that works without instance groups in a regional scope?

The problem is that I have 3 virtual machines with the same source image in three different zones in a region. I can't put them in a MIG because each of them has to attach to a specific persistent disk, and from what I researched, I have no control over which VM in the MIG will be attached to which persistent disk (please correct me if I'm wrong). I explored the unmanaged instance group option too, but it only has zonal scope. Is there any way to create a load balancer that works with my VMs, or do I have to build another solution (e.g., NGINX)?
You have two options.
1. Create an Unmanaged Instance Group. This allows you to set up your VM instances as you want, and you can create multiple groups within the region (one per zone); see the gcloud sketch after this list.
Creating groups of unmanaged instances
2. Use a Network Endpoint Group. This supports specifying backends based on a Compute Engine VM instance's internal IP address; you can also specify other endpoint types, such as an Internet address.
Network endpoint groups overview
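A sketch of the first option with hypothetical names: one unmanaged group per zone, each added as a backend of the load balancer's backend service.

    # one unmanaged instance group per zone, each holding one VM
    gcloud compute instance-groups unmanaged create group-a --zone us-central1-a
    gcloud compute instance-groups unmanaged add-instances group-a \
        --zone us-central1-a --instances vm-a

    # repeat for the other two zones, then register each group as a backend
    gcloud compute backend-services add-backend my-backend-service --global \
        --instance-group group-a --instance-group-zone us-central1-a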
The solution in my case was using a dedicated VM, installing nginx as a load balancer, and assigning static IPs to each VM in the group. I couldn't get managed or unmanaged instance groups and the managed load balancer to work.
That worked fine, but another solution I found in Qwiklabs was adding all instances to an "instance pool"; maybe in the future I will implement that solution.
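For reference, the dedicated-VM approach reduces to a plain nginx upstream over the three static IPs (addresses here are illustrative):

    upstream vms {
        server 10.128.0.11:8080;
        server 10.128.0.12:8080;
        server 10.128.0.13:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://vms;
        }
    }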

WSO2 clustering in a distributed deployment

I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are two or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration mentioned in this diagram.
Image of Config I am trying to achieve:
1. Is this configuration achievable or not?
2. Can we cluster 2 Publisher nodes and 2 Store nodes or not?
3. How do we cluster the Key Manager? Do we use the same settings as for the Identity Server?
4. Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure the components are using the ports implied by the offset?
5. Should we create a separate external database for each Carbon DB datasource entry in the master-datasources.xml file, or can we keep using the local H2 database for this? I have created the following databases (image: WSO2 databases I created); let me know whether I am correct in doing this or not.
6. I made several copies of the WSO2 binaries, as shown in the image, and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on one server?
7. For load balancing, which components should we load balance, and which ports should be used?
That configuration is achievable, but the Analytics servers are best run on separate machines, as they use a lot of resources.
Yes, you can.
Yes, you need a port offset. If you're on Linux, you can use the netstat -pln command and filter by the server's PID.
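For example (the offset value is illustrative), the offset lives in repository/conf/carbon.xml, or can be passed at startup, and netstat confirms what each PID is actually listening on:

    <!-- repository/conf/carbon.xml -->
    <Ports>
        <Offset>1</Offset>    <!-- shifts 9443 to 9444, 8243 to 8244, etc. -->
    </Ports>

    # or at startup:
    ./wso2server.sh -DportOffset=1

    # verify the listening ports for a given server PID:
    netstat -pln | grep <PID>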
Every server needs a local database; the other databases are shared, as described in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
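A shared datasource then points every node at the same external DB; a sketch of one master-datasources.xml entry (the URL and credentials are placeholders):

    <datasource>
        <name>WSO2REG_DB</name>
        <jndiConfig><name>jdbc/WSO2REG_DB</name></jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://db.example.com:3306/regdb</url>
                <username>wso2user</username>
                <password>secret</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            </configuration>
        </definition>
    </datasource>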
Having copies is one way of doing it. Another way is letting a single server act as multiple components; for example, you can run the Publisher and Store components together. You can see the recommended patterns in https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every component. For the Traffic Manager, use failover. Here are the ports you need to load balance:
Servlet port: 9443 (https) / 9763 (http) (for the admin console and admin services)
NIO port: 8243 (https) / 8280 (http) (for API calls at the gateway)

Fixed world-wide IPs for testing country detection?

I am integrating my application with some country-detection module.
Overall logic is:
1. Detect the client IP
2. Look up the IP in a GeoIP DB (for simplicity, let's assume it is the MaxMind DB)
3. Identify the country code based on the IP
4. Some other business logic based on that country code
That works very well, but I am having problems writing automated integration tests.
There is a way to override (force) certain test IPs, but the problem is that all IPs periodically change, and from time to time my tests fail because of that.
Any ideas how to stabilize such tests?
One thought I had was to use a directory of the main ISPs across the world, but I could not find one.
Thanks!
You can override the GeoIP lookup service in your test environment to return fixed values for certain test IPs. But this only tests #4; to test #1-3, you need to use real IP addresses.
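A minimal sketch of that override in Python, assuming your code funnels lookups through a single function (the module path and entry point here are hypothetical):

    # test_country_detection.py
    from unittest import mock

    # TEST-NET addresses (RFC 5737), safe to hard-code in examples
    FIXED_IPS = {
        "203.0.113.10": "US",
        "198.51.100.20": "DE",
    }

    @mock.patch("app.geoip.lookup_country", side_effect=FIXED_IPS.__getitem__)
    def test_business_logic_for_germany(mock_lookup):
        from app.logic import handle_request  # hypothetical entry point
        result = handle_request(client_ip="198.51.100.20")
        assert result.country_code == "DE"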
If your needs are limited to only a few countries, you can purchase private proxies with fixed IPs and run your tests through a proxied client with a fixed IP.
To automate at scale, you can look at https://www.geoscreenshot.com, which provides an API for visually testing many locations simultaneously.
You might need to use a VPN to test the application. Some providers offer free servers in fixed countries.
If it is a web application, you can use http://www.locabrowser.com to test page views from different servers in multiple countries.
Also, you can use the CSV file of IPs that MaxMind uses for its unit tests (see https://stackoverflow.com/a/23616234/354709).

How can I set up a Docker network with restricted communication?

I'm trying to create something like this:
The server containers each have port 8080 exposed, and accept requests from the client, but crucially, they are not allowed to communicate with each other.
The problem here is that the server containers are launched after the client container, so I can't pass container link flags to the client like I used to, since the containers it's supposed to link to don't exist yet.
I've been looking at the newer Docker networking stuff, but I can't use a bridge because I don't want server cross-communication to be possible. It also seems to me like one bridge per server doesn't scale well, and would be difficult to manage within the client container.
Is there some kind of switch-like docker construct that can do this?
It seems like you will need to create multiple bridge networks, one per server container. To simplify that, you may want to use docker-compose to specify how the networks and containers should be provisioned, and have the docker-compose tool wire it all up correctly.
Resources:
https://docs.docker.com/engine/userguide/networking/dockernetworks/
https://docs.docker.com/compose/
https://docs.docker.com/compose/compose-file/#version-2
One more side note: I think that exposed ports are accessible to all networks. If that's right, you may be able to set all of the server networking to none and rely on the exposed ports to reach the servers.
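A sketch of the per-server bridge layout in a version-2 compose file (the image names are placeholders): server1 and server2 share no network, while the client sits on both.

    version: "2"
    services:
      client:
        image: my-client
        networks: [net1, net2]
      server1:
        image: my-server
        networks: [net1]
      server2:
        image: my-server
        networks: [net2]
    networks:
      net1:
        driver: bridge
      net2:
        driver: bridge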
Hope this is relevant to your use case - I'm attempting to infer the context of your actual application from the diagram and comments. I'd recommend you go the service-discovery route. It may involve a little bit of simple API work over a central store (say Redis, or SkyDNS), but it would make things simple in the long run.
Kubernetes, for instance, uses SkyDNS to do so with DNS. At the end of the day, any orchestration tool of your choice would most likely do something like this out of the box: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
The idea is simple:
Use a DNS container that keeps entries of newly spawned servers
Allow the client container to query it for a list of servers, e.g., picture a DNS response with a bunch of server-<<ISO Timestamp of Server Creation>> entries
Disallow the server containers read-access to this DNS (managing this permission configuration without indirection, i.e., without proxying through an endpoint that allows writing into the DNS container but not reading it, is going to be exotic)
Bonus Edit: I just realised you can use a simpler Redis-like setup to do this, and that DNS might just be overengineering :)
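A sketch of that Redis-like setup with redis-py (the host, key, and member format are made up): servers register themselves at startup, and only the client is given the network path to read the set.

    import redis

    r = redis.Redis(host="registry", port=6379)

    # in each server container, at startup:
    r.sadd("servers", "server-2016-05-01T12:00:00Z:8080")

    # in the client container:
    servers = {s.decode() for s in r.smembers("servers")}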

Store and forward HTTP requests with retries?

Twilio and other HTTP-driven web services have the concept of a fallback URL, where the web service sends a GET or POST to a URL of your choice if the main URL times out or otherwise fails. In the case of Twilio, they will not retry the request if the fallback URL also fails. I'd like the fallback URL to be hosted on a separate machine so that the error doesn't get lost in the ether if the primary server is down or unreachable.
I'd like some way for the secondary to:
1. Store requests to the fallback URL
2. Replay the requests to a slightly different URL on the primary server
3. Retry #2 until success, then delete the request from the queue/database
Is there some existing piece of software that can do this? I can build something myself if need be, I just figured this would be something someone would have already done. I'm not familiar enough with HTTP and the surrounding tools (proxies, reverse proxies, etc.) to know the right buzzword to search for.
There are a couple of possibilities.
One option is to use the Common Address Redundancy Protocol, or carp. A brief description from the man page follows.
"carp allows multiple hosts on the same local network to share a set of IP addresses. Its primary purpose is to ensure that these addresses are always available, but in some configurations carp can also provide load balancing functionality."
It should be possible to configure IP balancing such that when the primary (master) HTTP service fails, the secondary (backup) HTTP service becomes the master. carp is host-oriented, as opposed to application-service-oriented, so when the HTTP service goes down, something should also take down the network interface for carp to do its thing. This means you would need more than one IP address in order to log into the machine and do maintenance. You would also need a script to do the follow-up actions once the original service comes back online.
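On OpenBSD, for instance, the shared address looks roughly like this on both hosts (the interface, password, and addresses are illustrative; the host with the lower advskew wins the master election):

    # on the primary
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass mysecret carpdev em0 advskew 0 \
        192.0.2.50 netmask 255.255.255.0

    # on the backup (higher advskew = less preferred)
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass mysecret carpdev em0 advskew 100 \
        192.0.2.50 netmask 255.255.255.0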
The second option is to use nginx. This is probably better suited to what you are trying to do.
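A sketch of the nginx side (hostnames are placeholders): the secondary is marked backup so it only receives traffic when the primary fails, and proxy_next_upstream retries a failed request on the other server. Note this covers only the failover leg; the store-and-replay queue on the secondary would still be your own code.

    upstream callback {
        server primary.example.com:8080 max_fails=3 fail_timeout=30s;
        server secondary.example.com:8080 backup;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://callback;
            proxy_next_upstream error timeout http_502 http_503;
        }
    }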
Many years ago I needed something similar to what you are trying to do, and I ended up hacking together something that did it. Essentially it was a switch: when 'A' fails, switch over to 'B'. The re-sync process was to take time-stamped logs from 'B' and play them back to 'A' once 'A' was back online.
