How would I set up Organizations and Environments with this use case? - apigee

I'm upgrading from 3.8 to the latest Apigee version, now called Apigee Edge, and setting up my DEV/TST instance.
In my 3.8 install, a single non-Production instance supports 12 development and test environments:
• There are currently 6 DEV (DEV01, DEV02,…DEV06) and 6 TST (TST01, TST02,…TST06) instances to support current versions and in-development versions
• Each instance has a copy of each API Proxy (“epapi”, “ewsapi” and “Token Service”)
• Each instance has 3 virtual servers, one for each API Proxy
I had these in one domain in 3.8. I named the 12 deployed applications epapi_TST01, epapi_TST02,… epapi_TST06 and epapi_DEV01, epapi_DEV02,…epapi_DEV06
What is the best Organization and Environment strategy to implement this in Apigee Edge?

I would create 2 organizations, dev and tst, to keep them logically separated. Inside each organization you can then have multiple environments. Example:
Organization: dev
Environments: dev1, dev2, dev3
Organization: tst
Environments: tst1, tst2, tst3
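On on-premises (OPDK) installs, environments can be created with the management API. A minimal sketch, assuming the default management port and placeholder host/credentials:
curl -u admin@example.com \
  -H "Content-Type: application/json" \
  -X POST "http://<management-server>:8080/v1/organizations/dev/environments" \
  -d '{ "name": "dev1" }'
Repeat per environment (dev2, dev3, and likewise under the tst organization).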
There are multiple ways to configure the environments so that proxies can be deployed to any of them interchangeably. The information below describes 2 ways you can leverage virtual host configurations.
NOTE: The configurations below are not self-service in the free offering of Apigee Edge and are not likely to be changed away from the defaults described here: http://apigee.com/docs/api-services/content/virtual-hosts. This information mostly pertains to on-premises deployments of Apigee Edge or paid accounts.
Virtual hosts are scoped at the environment level. This means you can have the same virtual host named default in different named environments. However, the configuration for the virtual host in each environment would have a different port and/or host alias configured. This will allow you to leave the ProxyEndpoint configuration the same for <BasePath> and <VirtualHost>.
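For example, the HTTPProxyConnection in each ProxyEndpoint can stay byte-for-byte identical across environments (a sketch; /basepath is a placeholder):
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/basepath</BasePath>
    <VirtualHost>default</VirtualHost>
  </HTTPProxyConnection>
</ProxyEndpoint>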
Example of strategy using different virtual host ports:
dev1:
vhost name: default
vhost port: 8080
dev2:
vhost name: default
vhost port: 8081
dev3:
vhost name: default
vhost port: 8082
Notice the name is the same, so the <VirtualHost> configuration in your Apigee API proxy bundle remains identical no matter which dev environment you deploy to. However, you would make a request into each environment as follows:
dev1: http://dev.api.example.com:8080/basepath/resource
dev2: http://dev.api.example.com:8081/basepath/resource
dev3: http://dev.api.example.com:8082/basepath/resource
The above strategy ensures you can deploy the same proxy config to multiple environments without conflicts when activating the API proxy; the proxy simply listens on a different port depending on the environment. You can then put load balancers in front of Apigee to abstract away the different ports used to access the environments.
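For illustration, the dev1 virtual host definition on an on-premises install might look like the sketch below (host alias and interface values are assumptions); dev2 and dev3 would differ only in <Port>:
<VirtualHost name="default">
  <HostAliases>
    <HostAlias>dev.api.example.com</HostAlias>
  </HostAliases>
  <Interfaces/>
  <Port>8080</Port>
</VirtualHost>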
A similar result can be achieved by using the hostAliases config on the virtual host instead, using the HTTP Host header to determine which environment the request routes to rather than different ports.
Example virtual host config using hostAliases:
dev1:
vhost name: default
vhost port: 80
hostAliases: ["dev1.api.example.com"]
dev2:
vhost name: default
vhost port: 80
hostAliases: ["dev2.api.example.com"]
dev3:
vhost name: default
vhost port: 80
hostAliases: ["dev3.api.example.com"]
When hostAliases are configured, the Host header value is used to route the request to the API proxy deployed in the matching environment. Example requests using this config:
dev1: http://dev1.api.example.com/basepath/resource
dev2: http://dev2.api.example.com/basepath/resource
dev3: http://dev3.api.example.com/basepath/resource
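The matching dev1 virtual host definition might look like this sketch, where only the <HostAlias> changes per environment:
<VirtualHost name="default">
  <HostAliases>
    <HostAlias>dev1.api.example.com</HostAlias>
  </HostAliases>
  <Port>80</Port>
</VirtualHost>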

Related

Remote IP based SSL in Kubernetes Ingress

In plain nginx, I can use the nginx geo module to set a variable based on the remote address. I can use this variable in the ssl path to choose a different SSL certificate and key for different remote networks accessing the server. This is necessary because the different network environments have different CAs.
How can I reproduce this behavior in a Kubernetes nginx ingress? Or even Istio?
You can customize the generated config, both for the base and for each Ingress. I'm not familiar with the exact config you are describing, but some mix of the various *-snippet ConfigMap options (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-snippet) or a custom template (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/) should get you there.
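As an untested sketch of the snippet route: assuming the controller's nginx build includes the standard geo module and is new enough to allow variables in ssl_certificate (nginx 1.15.9+), you could declare the geo map in the controller ConfigMap's http-snippet and reference it from a per-Ingress server-snippet annotation. All network ranges and certificate paths below are assumptions:
# in the ingress-nginx controller ConfigMap
http-snippet: |
  geo $remote_addr $client_cert {
    default      /etc/certs/default.crt;
    10.1.0.0/16  /etc/certs/net-a.crt;
  }
  geo $remote_addr $client_key {
    default      /etc/certs/default.key;
    10.1.0.0/16  /etc/certs/net-a.key;
  }
# annotation on the Ingress resource
nginx.ingress.kubernetes.io/server-snippet: |
  ssl_certificate     $client_cert;
  ssl_certificate_key $client_key;
Note this may interact with the controller's own certificate handling, so treat it as a starting point rather than a drop-in solution.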

Meteor mup using Lets Encrypt with multiple domains only 1 ip

I have several sites running on different (virtual) ubuntu servers, something like this:
mysite.mydomain.com (10.3.0.5)
another.mydomain.com (10.3.0.7)
siteabc.differentdomain.eu (10.3.0.16)
I had paid certificates for all of them, and was using MUP (Meteor Up) to deploy them:
proxy: {
  domains: 'mysite.mydomain.com',
  ssl: {
    crt: './mysite_mydomain_com.crt',
    key: './mysite_mydomain_com.key',
    forceSSL: true
  }
}
Now I want to use Let's Encrypt for all of them. I forwarded port 80 to 10.3.0.5 (the first site), and this works (MUP creates the nginx docker containers automatically, etc.), but the others don't work because they need port 80, which is already taken by the first one.
proxy: {
  domains: 'mysite.mydomain.com',
  ssl: {
    letsEncryptEmail: 'mysite@mydomain.com',
    forceSSL: true
  }
}
Is it possible to have multiple domains behind the same IP and still use Let's Encrypt? And how would I do that for Meteor applications and Meteor-up deployments?
Yes is the short answer. MUP installs a docker image running nginx-proxy: https://github.com/nginx-proxy/nginx-proxy
nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
See Automated Nginx Reverse Proxy for Docker for why you might want to use this.
You don't need to worry about the details; it automatically directs traffic to the correct docker instance based on the target URL. I run several staging/demo servers on the same EC2 instance using it. Easy :)
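A minimal sketch of what that looks like in practice: forward ports 80/443 to the one host running the proxy, then point every app's mup config at that same server, each with its own domain (the domain and email below are assumptions):
// mup.js of the second app, deployed to the same host as the first
proxy: {
  domains: 'another.mydomain.com',
  ssl: {
    letsEncryptEmail: 'admin@mydomain.com',
    forceSSL: true
  }
}
nginx-proxy inspects the Host header of each incoming request and routes it to the matching container, so all three sites share one IP while each gets its own Let's Encrypt certificate.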

kubernetes (rancher) ingress understanding

What I know / have running:
I have a running Rancher HA setup (2.4.2) on vSphere with an L4 nginx LB in front of it. Accessing the UI and provisioning new clusters (vSphere node driver) works great. I know I'm not in the cloud and cannot use an L7 LB (apart from nip.io or MetalLB maybe), and deploying workloads and exposing them via NodePort works great (so the workloads are available on the specified port on each node a corresponding pod is running on).
My question:
Is it possible to expose (maybe via ingress) applications on any of my running clusters under the domain/address where I access the Rancher UI (in my case: https://rancher-things.local)? For example, if I deployed a Harbor registry, could I expose it (local network, not public) as something like https://rancher-things.local/harbor? Or, if that won't work, is it possible to deploy an L4 load balancer for accessing applications on or in front of a specific cluster?
Thank you.
There should already be an ingress resource which exposes the Rancher UI. You can edit that ingress and add a path /harbor to route the traffic to the harbor service:
paths:
- path: /harbor
  backend:
    serviceName: harbor
    servicePort: 80
@arghya-sadhu, the LB is pointing to the HA cluster (a.k.a. the upstream/management/RKE/HA cluster) running Rancher, not Harbor. It's not recommended to create any other ingresses in this HA cluster. Also, I think the harbor workload is running in a downstream cluster, and there is no LB pointing to the nodes of this cluster.
Patrick, you can create a Service exposing your application's port via http and use Rancher's proxy mechanism to access your app's UI via the Rancher URL. If you have monitoring enabled in your setup, you can see how the Grafana UI is exposed via this mechanism.
After creating the service, you can find the URL info using the following command:
kubectl -n <your_app_namespace> cluster-info
The downside of this approach is you don't have a dedicated LoadBalancer handling the traffic, but for smaller scale setup, this should be ok.
Example URL of grafana:
https://<rancher-fqdn>/k8s/clusters/<cluster-id>/api/v1/namespaces/cattle-prometheus/services/http:access-grafana:80/proxy
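And a rough sketch of the same pattern for your own app (all names and ports here are assumptions, not Harbor's actual ones):
apiVersion: v1
kind: Service
metadata:
  name: harbor-ui
  namespace: harbor-system
spec:
  selector:
    app: harbor
  ports:
  - name: http
    port: 80
    targetPort: 8080
which should then be reachable via:
https://<rancher-fqdn>/k8s/clusters/<cluster-id>/api/v1/namespaces/harbor-system/services/http:harbor-ui:80/proxy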

What is kube-nginx and how does it differ from kube-proxy?

What is the difference between kube-nginx (here I am not talking about the nginx ingress controller) and kube-proxy?
I've seen a recent deployment where all nodes in the cluster run one kube-proxy pod (which is used for accessing services running on the nodes, according to https://kubernetes.io/docs/concepts/cluster-administration/proxies/) and one kube-nginx pod, so they are used for different purposes.
As mentioned by the community above and here:
K8s components require a loadbalancer to access the apiservers via a reverse proxy. Kubespray includes support for an nginx-based proxy that resides on each non-master Kubernetes node. This is referred to as localhost loadbalancing. It is less efficient than a dedicated load balancer because it creates extra health checks on the Kubernetes apiserver, but is more practical for scenarios where an external LB or virtual IP management is inconvenient.
This option is configured by the variable loadbalancer_apiserver_localhost (defaults to True, or False if there is an external loadbalancer_apiserver defined). You may also define the port the local internal loadbalancer uses by changing loadbalancer_apiserver_port; this defaults to the value of kube_apiserver_port. It is also important to note that Kubespray will only configure kubelet and kube-proxy on non-master nodes to use the local internal loadbalancer.
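In other words, kube-nginx is a plain nginx doing TCP load balancing of apiserver traffic from the node to the masters, while kube-proxy programs Service routing (iptables/IPVS) for in-cluster traffic. A hypothetical sketch of the nginx config behind kube-nginx (addresses are assumptions):
# nginx.conf of the kube-nginx static pod (sketch)
stream {
  upstream kube_apiserver {
    least_conn;
    server 10.0.0.10:6443;   # master 1
    server 10.0.0.11:6443;   # master 2
  }
  server {
    listen 127.0.0.1:6443;   # kubelet/kube-proxy on this node connect here
    proxy_pass kube_apiserver;
  }
}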

how to set up a kubernetes frontend to serve multiple routes

I'm learning kubernetes and its ecosystem, and even though I've seen plenty of examples of how to use nginx (as a reverse proxy) for frontend duties, I struggle to understand how to set it up and configure it to serve multiple paths. For example, if I have two backends (app1 and app2) deployed to kubernetes and I want them to appear in my frontend as /app1 and /app2, I would assume that I need the following nginx configuration:
upstream app1 {
    server app1:PORT1;
}
upstream app2 {
    server app2:PORT2;
}
server {
    listen 443 ssl;
    location /app1 {
        proxy_pass https://app1;
        # and additional configuration stuff
    }
    location /app2 {
        proxy_pass https://app2;
        # and additional configuration stuff
    }
}
With this site configuration I can build an nginx image and deploy it as a service into kubernetes. But for the deployment I need to specify in yaml which app to match, and if I want to match my frontend with my two backends, how should I write its deployment spec? I followed the example from [1] but still can't understand how to extend it to the aforementioned use case.
[1] https://coderjourney.com/kubernetes-frontend-service-with-nginx/
Welcome to the kubernetes ecosystem!
Let me say the problem back to make sure that we're talking about the same thing:
You have 2 applications in a cluster. From a Kubernetes perspective, each should consist of the following in-cluster resources:
• a Deployment, which creates a ReplicaSet and some Pods for the app in the cluster
• a Pod spec that references the app's container image
• a Service spec which uses labels to refer to the respective Deployment/ReplicaSet/Pod objects; the name of the Service can be used in other Kubernetes objects
Given that footprint for those two applications, you then want to expose each application under its own route/path, as described above, under a single domain, so outside visitors can get to each independently.
If that's all accurate, then there is no need to deploy your own nginx container. You just use another Kubernetes object, called Ingress.
If you are running your own Kubernetes cluster on bare metal or similar, an Ingress is an abstract name for what in concrete terms amounts to a specially configured nginx.
If you are running a managed Kubernetes on gcloud, Azure, or AWS, then an Ingress in concrete terms is usually a load balancer provided by the cloud.
The documentation should help:
https://kubernetes.io/docs/concepts/services-networking/ingress/
More specifically, an Ingress Controller is a Kubernetes term for a piece of software in the cluster that watches for Ingress resources, then uses the details in those resources to produce new configuration.
An nginx Ingress Controller:
https://github.com/kubernetes/ingress-nginx/blob/master/README.md
will first create nginx pods in the cluster, then watch for changes to Ingress specs, update the nginx configuration to match what's specified in them, and reload nginx when the configuration changes.
A cloud Ingress Controller works similarly, though it uses the cloud APIs to update cloud load balancer configuration.
So, to a first approximation, what you might want to do is just follow the Simple Fanout Ingress example, which exposes two applications (each behind its own Service, as described above) under two paths in a single domain:
https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout
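A minimal fanout sketch, using the same older v1beta1 schema as the snippet earlier on this page (the host, service names, and ports are assumptions):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 80
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 80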
Hope that helps.
