I'm using EKS, AWS RDS, and bitnami/nginx (not bitnami/nginx-ingress-controller) with an AWS NLB. My RDS instance is private-access only, and the EKS cluster can reach it. An EC2 instance in another AWS account wants to access the private RDS through the nginx server (behind the NLB).
ec2(vpc2) -> nginx (eks/vpc1) -> rds(vpc1)
How should I configure the bitnami/nginx chart's values.yaml? I have other solutions, but in this question I want to discuss the bitnami/nginx approach.
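For concreteness, here is a rough sketch of the kind of values.yaml I have in mind. Since RDS traffic is plain TCP (e.g. PostgreSQL on 5432), nginx needs a stream block rather than an HTTP server block; the keys below (streamServerBlock, service.extraPorts) are assumptions that would need to be checked against the chart version in use:

# Sketch only: verify key names against the bitnami/nginx chart's own values.yaml.
service:
  type: LoadBalancer
  annotations:
    # ask EKS for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  # assumed way to expose an extra TCP port on the service
  extraPorts:
    - name: postgres
      port: 5432
      targetPort: 5432

# assumed key for stream (TCP) configuration; older chart versions may only
# expose an HTTP serverBlock, in which case a custom ConfigMap with a
# stream {} block would be needed instead
streamServerBlock: |-
  server {
    listen 5432;
    # placeholder for the real private RDS endpoint
    proxy_pass my-db.xxxxxxxx.us-east-1.rds.amazonaws.com:5432;
  }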
Related
I'm planning to create a new service for handling user image uploads. Although I know that the best practice is to use external cloud storage dedicated to hosting static files (e.g. Amazon S3), I'm currently tight on budget and am trying to store the files on the same shared host that also serves my website. This host already has an Nginx reverse proxy that directs requests to the appropriate service.
In a traditional non-containerized application, the backend service and the Nginx reverse proxy sit on the same host and can access the same filesystem: the backend service stores the uploads, and Nginx serves the static files directly. But how should this be approached in a containerized way? Currently, I can think of two approaches:
Create a container containing both the backend service and Nginx. Nginx will reverse-proxy requests either to the service's API or to the static image files.
Create a container containing only the backend service. The Nginx reverse-proxy container will share a volume with the backend service so it can access the files.
This is how the traffic would be directed with each of the approaches above.

Approach 1 (backend and Nginx in one container):

Upload: Client -> Nginx reverse-proxy -> Nginx reverse-proxy #2 -> backend -> store static files
Download: Client -> Nginx reverse-proxy -> Nginx reverse-proxy #2 -> get static files

Approach 2 (shared volume):

Upload: Client -> Nginx reverse proxy -> backend -> store static files in shared volume
Download: Client -> Nginx reverse proxy -> get static files
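A rough sketch of what I imagine approach 2 could look like with Docker Compose (service names, ports and paths are placeholders, not a tested setup):

# Approach 2 sketch: Nginx and the backend share an uploads volume.
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - uploads:/var/www/uploads:ro   # Nginx serves the files directly, read-only
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  backend:
    build: ./backend
    volumes:
      - uploads:/app/uploads          # the backend writes uploaded files here

volumes:
  uploads: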
I'm quite new to containerized architecture. Is there a better architecture or a more efficient way to handle the file upload case?
On my server I run some applications directly on the host. In parallel, I have a single-node K3s cluster that also contains a few applications. To manage traffic routing and HTTPS certificates for the individual services in a central place, I want to use Nginx. Inside the cluster a Traefik ingress controller handles the routing in that context.
To reverse-proxy to each application, whether it runs directly on the host or in a container in K3s, Nginx must be able to reach every application locally, no matter where it runs (without the traffic leaving the server). For example, proxying myservice.mydomain.com to localhost:8080 should end up at the web server of a natively running application, and myservice2.mydomain.com at the web server of a container in K3s.
Now, is this possible if Nginx itself runs in the K3s cluster, or do I have to install it directly on the host machine?
Yes, you can use Nginx that way, keeping Nginx in front of both the host and K3s.
You can expose your service from K3s as a NodePort, while the local service running directly on the host machine listens on its own port.
Nginx will then forward the traffic like this:
Nginx -> MachineIP:8080 (K3s NodePort) -> application on K3s
Nginx -> MachineIP:3000                -> application running on the host
Example: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
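As a rough illustration (names and ports are placeholders, not a tested manifest), exposing an in-cluster application on a fixed NodePort could look like this; the host Nginx then proxies myservice2.mydomain.com to localhost:30080 while myservice.mydomain.com keeps pointing at localhost:8080:

# Sketch: expose the K3s application on a fixed node port so the host
# Nginx can reach it via localhost. Names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: myservice2
spec:
  type: NodePort
  selector:
    app: myservice2
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port of the application
      nodePort: 30080   # fixed port on the node (default allowed range is 30000-32767)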
I have 3 containers on ECS: web, api and nginx. Basically nginx is proxying traffic to web and api containers:
upstream web {
    server web-container:3000;
}

upstream api {
    server api-container:3001;
}
But every time I redeploy web or api, their IPs change, so I have to redeploy nginx afterwards in order to make it "pick up" the new IPs.
Is there a way to avoid this, so that I could just update, say, the api service and the nginx service would automatically proxy to the correct IP address?
I assume these containers belong to 3 different task definitions and ultimately 3 different tasks (or better, 3 different services).
If that is the setup, then you want to use ECS service discovery for this. It only works with ECS services, and the idea is that you create 3 distinct services, each with 1+ tasks in it. You give each service a name (e.g. nginx, web, api), and every container in them is able to resolve the other containers by their FQDN (e.g. api.local). When your container in the nginx service tries to connect to api.local, service discovery resolves that name to the IP of one of the tasks in the api ECS service.
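With service discovery in place, the nginx upstreams can point at the stable discovery names instead of container IPs. As a sketch (assuming a private DNS namespace called local; adjust to whatever namespace you configure):

upstream web {
    server web.local:3000;
}

upstream api {
    server api.local:3001;
}

One caveat: open-source nginx resolves upstream hostnames only when the configuration is loaded, so an nginx reload (or a resolver-based setup) may still be needed when a task's IP changes.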
If you want to see an example of how this is set up, you can look at this demo app and particularly at this CloudFormation template.
I'm currently working on replicating our AWS EKS cluster on Azure AKS.
In our EKS cluster we use an external Nginx with the PROXY protocol to identify the client's real IP and check whether it is whitelisted in our Nginx configuration.
In AWS, we did this by adding the aws-load-balancer-proxy-protocol annotation to the Kubernetes service to support Nginx's proxy_protocol directive.
Now the day has come: we want to run our cluster on Azure AKS as well, and I'm trying to implement the same mechanism.
I saw that the AKS load balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things; I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes service using the loadBalancerSourceRanges field instead of at the Nginx level.
But I think the load balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes the traffic through.
I'm now stuck trying to understand what I'm missing. I tried to handle it from both ends (the load balancer and the Kubernetes service), and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client's real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
You can use the real_ip and geo modules to build the IP whitelist configuration in Nginx. Alternatively, loadBalancerSourceRanges should let you whitelist client IP ranges at the service level; AKS applies those ranges to the associated NSG.
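As a rough sketch (the service name and CIDR are placeholders), the ingress controller's LoadBalancer service would combine both settings like this:

# Sketch only: a LoadBalancer service that preserves the client source IP
# and restricts access to known CIDR ranges.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  loadBalancerSourceRanges:      # applied to the NSG by AKS
    - 203.0.113.0/24
  selector:
    app: nginx-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443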
I have a Python Flask web server running inside a docker container on an AWS EC2 Ubuntu machine. The container uses the default network setting (docker0). From within the host EC2 instance, I can send requests (GET, POST) to this web server using the container's IP on the docker bridge (172.x.x.x) and the forwarded ports (3000:3000) of the host.
url: http://172.x.x.x:3000/<api address>
How can I send requests (GET, POST) to this web server from the outside world? For example from another web server running in another EC2 machine. Or even from the web using my web browser?
Do I need to get a public IP Address for my docker host?
Is there is another way to interact with such web server within another web server running in another EC2?
If you have a solution please explain with as many details as you can for me to understand it.
The only way I can think of is to write a web server on the main EC2 instance that listens for the requests and forwards them to the appropriate docker container web servers?! But that would mean a lot of redundant code, and I would rather send requests to the web server running in the container directly!
The IP address of the docker bridge is not public. Your EC2 instance usually has a public IP address, though. You need something listening on a port of the EC2 instance itself that passes traffic to your docker/Flask server; the port mapping you already have (3000:3000) does exactly that, as long as it binds to the host's interfaces and the instance's security group allows inbound traffic on that port. Then you can call it from outside using ec2-instance-ip:port.
This is still not a long-term solution, as EC2 public IPs change when instances are stopped and started. You'd be better off using a load balancer or an Elastic IP if you want the IP/port to be reliable.
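As a rough illustration (port and build context are placeholders), publishing the Flask port on the host in a Compose file could look like this; the instance's security group must also allow inbound traffic on that port:

# Sketch: publish the Flask container's port on the EC2 host so it is
# reachable via the instance's public IP. Port and build path are placeholders.
services:
  flask-api:
    build: .
    ports:
      - "0.0.0.0:3000:3000"   # host port 3000 -> container port 3000

Requests from outside would then go to http://<ec2-public-ip>:3000/<api address>.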
You're right, that would mean a lot of redundant code and an extra point of failure. That's why it's better to use Amazon's managed container service, ECS (https://aws.amazon.com/ecs/). This way you launch an EC2 instance that acts as a docker host and has a public IP address. It still allows you to SSH into your EC2 instance and change things.