OpenStack Swift: is there a module to redirect clients by region location? [cdn]

I am currently playing with OpenStack Swift; my goal is to deploy a multi-region cluster. For example, one node of the Swift cluster would be deployed in the US and one in the EU.
Is there a module or an option in swift-proxy to redirect clients based on their region?
If that is not possible, what other solutions do you suggest? Should I develop my own proxy server that redirects clients to the nearest node (using geolocation, e.g. MaxMind)?
Resources:
Configuring a multi-region cluster
Proxy server configuration
EDIT: One of the OpenStack contributors answered me: the code for geographically-distributed Swift clusters does not yet exist in the Git repository, and the link I posted in the resources is a set of proposed changes. There is no code in Swift to do that sort of redirection; I will need to write a piece of WSGI middleware and stick it in the proxy server's middleware pipeline.
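Something along these lines is what I have in mind (a sketch only: the region endpoints, the local_region option, and the MaxMind GeoLite2 lookup are my own assumptions, not existing Swift code):

```python
# Sketch of geo-redirect WSGI middleware for the proxy pipeline.
# Region endpoints and the GeoLite2 database path are assumptions.
import geoip2.database

REGION_ENDPOINTS = {
    "NA": "https://us.swift.example.com",  # hypothetical US cluster
    "EU": "https://eu.swift.example.com",  # hypothetical EU cluster
}

class GeoRedirect(object):
    def __init__(self, app, conf):
        self.app = app
        self.local_region = conf.get("local_region", "NA")
        self.reader = geoip2.database.Reader(
            conf.get("geoip_db", "/var/lib/GeoLite2-Country.mmdb"))

    def __call__(self, environ, start_response):
        try:
            client_ip = environ.get("REMOTE_ADDR", "")
            region = self.reader.country(client_ip).continent.code
        except Exception:
            region = self.local_region  # lookup failed: serve locally
        target = REGION_ENDPOINTS.get(region)
        if target is None or region == self.local_region:
            return self.app(environ, start_response)  # already closest
        # Send the client to the nearer cluster, preserving the path.
        start_response("307 Temporary Redirect",
                       [("Location", target + environ.get("PATH_INFO", "/"))])
        return [b""]

def filter_factory(global_conf, **local_conf):
    # Standard paste.deploy factory so Swift can load the filter.
    conf = dict(global_conf, **local_conf)
    return lambda app: GeoRedirect(app, conf)
```

The filter would then be referenced from the pipeline in /etc/swift/proxy-server.conf like any other middleware.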

Not exactly an answer to your needs, but as you know, OpenStack has a side project, Keystone, in which endpoints are stored along with region information. If you want to write your own implementation, that can be a starting point. Also, since there is a cdn tag on your question: there is a project named SOS (Swift Origin Server) that makes OpenStack Swift work as a CDN origin server. I hope these can help you with your implementation.
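For example, with the keystoneclient library you could enumerate the endpoints together with their regions and pick the nearest one (a sketch; the auth URL and credentials are placeholders):

```python
# Sketch: reading region-tagged endpoints out of Keystone (v2 API).
from keystoneclient.v2_0 import client

keystone = client.Client(
    username="admin",
    password="secret",            # placeholder credentials
    tenant_name="admin",
    auth_url="http://controller:5000/v2.0",  # placeholder endpoint
)

# Each registered endpoint carries the region it belongs to.
for ep in keystone.endpoints.list():
    print(ep.region, ep.publicurl)
```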

Related

World Wide Web Publishing Service and HTTP.SYS

I was reading this article from Microsoft and in step 5 it says: The WWW Service uses the configuration information to configure HTTP.sys.
What exactly does the WWW service configure in HTTP.sys?
What is the purpose of the WWW service?
How is it different from the Windows Process Activation Service (WAS)?
Thank you!
In short, the WWW service reads the configuration elements from applicationHost.config and applies the portions related to the Windows HTTP API to the HTTP.sys driver.
The purpose of WWW service is roughly documented in https://learn.microsoft.com/en-us/iis/get-started/introduction-to-iis/introduction-to-iis-architecture#how-the-www-service-works-in-iis
Don't try to acquire a deep understanding of such components at the beginning. They are not open source, so the documentation is rather vague.
The same applies to WAS, https://learn.microsoft.com/en-us/iis/get-started/introduction-to-iis/introduction-to-iis-architecture#windows-process-activation-service-was
If you are taking a course, just memorize the facts at this stage. Once you get more familiar with IIS daily operations, you will get more insights.

WSO2 ESB 4.8.1 Clustering

Is it possible to run one ESB node in a dual role, as both worker and manager?
I'm using WSO2 ESB 4.8.1 and nginx as the load balancer.
This is pretty easy. This is what you have to do.
Forget about nginx for the moment and set up the ESB cluster; let's say a cluster with one manager and one worker. I think you will be able to get it done by following the instructions here. Instead of the WSO2 ELB mentioned in the doc, you are going to use nginx, and instead of the ELB you can set the management and worker nodes as the well-known members, i.e. in both nodes you set both nodes as the well-known members.
Once you have the cluster working, you should be able to send requests to an artifact deployed to each node separately. The difference between the manager node and the worker node is that only the manager commits to the SVN repository, so when you deploy new artifacts you should deploy them through the manager node.
Now you have to configure two sites in nginx; a sketch follows below. Let's assume you decide to use esbmgt.mydomain.com for the management node and esb.mydomain.com for the worker. In esbmgt's upstream you only list the manager node, and you route requests to port 9443 of that node. In esb's upstream you list both nodes, and requests are routed to ports 8280 (HTTP) and 8243 (HTTPS). That's because the ESB serves traffic on those ports, while the management UI is exposed via 9443 (HTTPS).
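A rough nginx layout for those two sites might look like this (a sketch only: the internal hostnames and certificate paths are placeholders, and for brevity only the HTTPS port 8243 is fronted; a parallel plain-HTTP server block would cover 8280):

```nginx
upstream esbmgt_nodes {
    server esb-mgr.internal:9443;    # manager node (management UI)
}

upstream esb_nodes {
    server esb-mgr.internal:8243;    # manager can also serve traffic
    server esb-wkr.internal:8243;    # worker node
}

server {
    listen 443 ssl;
    server_name esbmgt.mydomain.com;
    ssl_certificate     /etc/nginx/ssl/esbmgt.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/esbmgt.key;   # placeholder
    location / {
        proxy_pass https://esbmgt_nodes;
    }
}

server {
    listen 443 ssl;
    server_name esb.mydomain.com;
    ssl_certificate     /etc/nginx/ssl/esb.crt;      # placeholder
    ssl_certificate_key /etc/nginx/ssl/esb.key;      # placeholder
    location / {
        proxy_pass https://esb_nodes;
    }
}
```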
I hope the above information will help you.

Running Kubernetes on vCenter

So Kubernetes has a pretty novel network model, which I believe is based on what it perceives to be a shortcoming of default Docker networking. While I'm still struggling to understand (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution, and perhaps that will clue me in a little better.
Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place.
I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel.
My basic understanding is:
Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
Create a cbr0 bridge somewhere and make sure it's persistent (but on which machine?)
Remove/disable the MASQUERADE rule installed by Docker.
Somehow configure iptables rules (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs.
Some other setup is required to make use of load balanced Services and dynamic DNS.
Provision 5 VMs: 1 master, 4 minions
Install/configure Docker on all 5 VMs
Install/configure kubectl, the controller-manager, the apiserver, and etcd on the master, and run them as services/daemons
Install/configure kubelet and kube-proxy on each minion and run them as services/daemons
These steps are the best I could collect from two full days of research, and they are likely wrong (or misdirected), out of order, and utterly incomplete.
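To make the bridge/NAT steps above concrete, here is my current reading of them as host-side commands on one minion (a sketch only; the 10.244.1.0/24 per-minion pod subnet and the Docker daemon flags are my assumptions):

```sh
# Create a bridge for this minion's pods (needs bridge-utils installed).
brctl addbr cbr0
ip addr add 10.244.1.1/24 dev cbr0    # assumed per-minion pod subnet
ip link set dev cbr0 up

# Tell the Docker daemon to use the bridge and skip its own NAT,
# e.g. via DOCKER_OPTS in /etc/default/docker:
#   DOCKER_OPTS="--bridge=cbr0 --ip-masq=false"

# Remove the MASQUERADE rule Docker installed for docker0, if present.
iptables -t nat -D POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```

Making the bridge persistent across reboots is distro-specific, which is part of what I'm unsure about.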
I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved.
How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?
I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.
For a general introduction to Kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful.
On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model.
From my experience, the problem with the Docker NAT type of approach is this: sometimes you need to configure all the endpoints of all nodes into your software (172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the pods' IPs into each other's pods; Docker complicates this with its NAT indirection.
See also Setting up the network for Kubernetes for a nice answer.
Comments on your other points:
1. On "All IPs in this subnet must be 'public' and not inside of some traditionally-private (classful) range":
The "internal network" of Kubernetes normally uses private IPs; see the slides above, which use 10.x.x.x as an example. I guess the confusion comes from some Kubernetes texts that use "public" to mean "visible outside of the node", not "in the Internet-public IP address range".
For anyone who is interested in doing the same, here is my current plan.
I found the kube-up.sh script, which installs a production-ish quality Kubernetes cluster on your AWS account. Essentially it creates one Kubernetes master EC2 instance and four minion instances.
On the master it installs etcd, apiserver, controller manager, and the scheduler. On the minions it installs kubelet and kube-proxy. It also creates an auto-scaling group for the minions (nice), and creates a whole slew of security- and networking-centric things on AWS for you. If you run the script and it fails creating the AWS S3 bucket, create a bucket of the same exact name manually and then re-run the script.
When the script is finished you will have Kubernetes up and running and ready for near-production usage (I keep saying "near" and "production-ish" because I'm too new to Kubernetes to know what actually constitutes a real deal productionalized cluster). You will need the AWS CLI installed and configured with a user that has full admin access to your AWS account (it goes ahead and creates IAM roles, etc.).
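For reference, the invocation is essentially the upstream-documented one (assuming a Kubernetes release checkout and a configured AWS CLI):

```sh
# Tell the turn-up scripts to target AWS, then bring the cluster up.
export KUBERNETES_PROVIDER=aws
cluster/kube-up.sh

# Tear it back down when finished experimenting.
cluster/kube-down.sh
```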
My game plan will be to:
Get comfortable working with Kubernetes on AWS
Keep hounding the Kubernetes team on Slack to help me understand how Kubernetes works under the hood
Reverse engineer the kube-up.sh script so that I can get Kubernetes running on premise (vCenter)
Blog about this process
Update this answer with a link to said blog.
Give me some time and I'll follow through.

For an app hosted on meteor.com, would it be reasonable to use a proxy to add SSL and a custom domain?

First, let me explain why. I've had some rough luck with third-party Meteor hosting providers, but I'd really rather not run my own servers (I have a Meteor app running with SSL on DigitalOcean, so I know how to do that; I would just rather have dedicated professionals run as much of my infrastructure as possible). From what I can see, meteor.com hosting is wonderful, with the caveat of not being able to have a custom domain with SSL.
So, would it make sense to put up an nginx server that just proxied https://example.com to https://example.meteor.com? For starters, would that work, and if it did, would it be performant?
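Concretely, I'm imagining something like this (a sketch; the certificate paths are placeholders, and whether meteor.com's front end accepts a proxied Host header is exactly what I'm unsure about):

```nginx
# Terminate SSL for the custom domain, proxy to the meteor.com app.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;  # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.com.key;  # placeholder

    location / {
        proxy_pass https://example.meteor.com;
        proxy_set_header Host example.meteor.com;  # meteor.com routes by Host
        proxy_ssl_server_name on;                  # send SNI upstream

        # Meteor's DDP runs over sockjs/websockets; forward the upgrade.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```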
For your info, Meteor's roadmap lists Galaxy (managed "meteor deploy" to your own servers) under "Under consideration for 1.1+", and it should be a perfect choice for you. Here is their Trello:
This is MDG's commercial product -- a managed cloud platform for deploying Meteor apps. You have control of the underlying hardware (you own the servers or the EC2 instances, and Galaxy manages them for you).
General Availability for Galaxy will be sometime after 1.0, since we want to focus on Meteor 1.0 and get it out as quickly as possible.
So in the meantime, if you just care about using your own domain, you can use something like domain name forwarding, which automatically directs your domain's visitors to a different website; masking prevents visitors from noticing the forwarding by keeping your domain name in the browser's address bar.
Also, in your case you don't necessarily need to add SSL, as Meteor already provides it when you deploy your app. Just try entering https://yourappnamehere.meteor.com in your browser and you can see an SSL certificate is already in place.

NFV on OpenStack

I am fairly new to NFV and SDN. I have OpenDaylight and OpenStack installed in one Fedora 20 VM, and I have a Mininet network as the underlying physical topology in a separate VM. I want to run services like VPN, L3 routing/NAT, and load balancing on OpenStack, but I don't have a very clear picture of how to start. As far as I understand, I have to run these services on OpenStack nodes (through VM instances) and route the traffic through the Mininet topology, with OpenDaylight as the controller in the middle.
My confusions are:
How to start writing the applications (Firewall, VPN, NAT, etc) on OpenStack?
Do I have to write a code for such services or is it command line configuration?
I came across the Neutron API; is that of any help?
Came across this: http://docs.openstack.org/api/openstack-network/2.0/content/API_extensions.html
I have looked at the other questions regarding writing "Hello World" on OpenStack but could not find anything. I shall be grateful to you for any information that could get me started on this project.
I would suggest you check out OpenBaton.
Nowadays I'm working with it; it can be used as an NFV MANO framework. In addition, it's ETSI compliant, and its solutions are easy to implement and configure.
As for your confusions: you do NOT need to write code explicitly for a firewall / VPN / LB. You need to configure OpenStack Neutron to enable these services directly; the code is already present, you only need to configure it. For NAT there is an L3 agent already running in the default setup (at least via Packstack).
Is the Neutron API of any use? I assume you are referring to the REST API and not the CLI.
Well, everything that you do on the Dashboard is actually expressed as a REST API call to the Neutron server (and not just Neutron, but all the other components of OpenStack). The components of OpenStack (Neutron, Nova, Glance, Keystone, etc.) interact with each other via REST APIs, and use an RPC mechanism internally within each component. Every click on the Dashboard is ultimately issued as a REST API call to the component servers!
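As an illustration, here is roughly the REST call behind "list networks" in the Dashboard (a sketch; the endpoint URLs, tenant, and credentials are placeholders for your deployment):

```python
# Sketch: the kind of REST call the Dashboard makes under the hood.
import requests

KEYSTONE = "http://controller:5000/v2.0"   # assumed Keystone endpoint
NEUTRON = "http://controller:9696"         # assumed Neutron endpoint

# 1. Get a token from Keystone (v2 password auth).
auth = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}
token = requests.post(KEYSTONE + "/tokens",
                      json=auth).json()["access"]["token"]["id"]

# 2. List networks from Neutron, exactly as the Dashboard would.
resp = requests.get(NEUTRON + "/v2.0/networks",
                    headers={"X-Auth-Token": token})
for net in resp.json()["networks"]:
    print(net["id"], net["name"])
```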
