Symfony not trusting CloudFront/ALB proxy

When I call $request->getClientIp() I get an AWS IP address back. My app is behind CloudFront and an ALB.
I've set framework.trusted_proxies to '127.0.0.1,REMOTE_ADDR' as per https://symfony.com/doc/current/deployment/proxies.html#but-what-if-the-ip-of-my-reverse-proxy-changes-constantly
The app is running on Fargate (ECS).
Where am I going wrong?
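For reference, a minimal sketch of the configuration described above, assuming a standard Symfony 5+ layout (the file path and the trusted_headers list are framework defaults, not taken from the question):

```yaml
# config/packages/framework.yaml
framework:
    # Trust localhost plus whatever peer made the TCP connection (the ALB)
    trusted_proxies: '127.0.0.1,REMOTE_ADDR'
    # Headers the trusted proxy is allowed to set
    trusted_headers: ['x-forwarded-for', 'x-forwarded-proto', 'x-forwarded-port']
```

Note that with REMOTE_ADDR only the immediate peer (the ALB) is trusted; any hops in front of it, such as CloudFront edges, still appear in X-Forwarded-For as ordinary clients.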

Related

Getting client original ip address with azure aks

I'm currently working on migrating an AWS EKS cluster to Azure AKS.
In EKS we use an external NGINX with proxy protocol to identify the client's real IP and check whether it is whitelisted in NGINX.
To do so in AWS, we added the aws-load-balancer-proxy-protocol annotation to the Kubernetes Service to support NGINX's proxy_protocol directive.
Now the day has come: we want to run our cluster on Azure AKS as well, and I'm trying to reproduce the same mechanism.
I saw that the AKS Load Balancer hashes the IPs, so I removed the proxy_protocol directive from my NGINX conf and tried several things. I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes Service using the loadBalancerSourceRanges API instead of at the NGINX level.
But I think the Load Balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck trying to understand where my knowledge is lacking; I've tried to handle it from both ends (the load balancer and the Kubernetes Service) and neither seems to cooperate.
Given my failures, what is the "right" way of passing the real client IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests
to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install
command. The client source IP is stored in the request header under
X-Forwarded-For. When using an ingress controller with client source
IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
You can use the real_ip and geo modules to build the IP whitelist configuration. Alternatively, loadBalancerSourceRanges should let you whitelist client IP ranges by updating the associated NSG.
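A rough sketch of what the real_ip + geo approach can look like in NGINX (the CIDR ranges below are placeholders, not values from the question; set_real_ip_from should cover whatever hop sets X-Forwarded-For in your cluster):

```nginx
# http context: recover the client address from X-Forwarded-For,
# trusting only the load balancer / node range (placeholder CIDR)
set_real_ip_from  10.0.0.0/8;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

# geo defaults to $remote_addr, which real_ip has already rewritten
geo $allowed {
    default        0;
    203.0.113.0/24 1;   # example whitelisted range
}

server {
    listen 80;
    location / {
        if ($allowed = 0) {
            return 403;
        }
        proxy_pass http://backend;
    }
}
```

With externalTrafficPolicy=Local on the ingress controller's Service, the address arriving in X-Forwarded-For is the real client IP rather than a SNATed node address.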

K8s Inter-service communication via FQDN

I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster). Is there any way to ensure that it resolves correctly and routes to the correct service?
Another issue is that internally the services communicate over HTTP, while publicly they expose HTTPS. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP because NGINX isn't involved; the call goes directly to the service. If there were some inter-service ruleset I could hook into (similar to ingress), I believe I could use it to terminate TLS and things would work.
Edit for clarification:
To be clear, this is about local development; I'm not deploying to AKS. When deployed to AKS this isn't an issue.
HTTPS is exposed via the NGINX ingress, which terminates TLS.
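For the DNS half of this, one common approach (a sketch only, with a hypothetical hostname and Service name; it does not address the HTTPS-to-HTTP mismatch) is a CoreDNS rewrite rule, so the public FQDN resolves to the in-cluster Service instead of 127.0.0.1:

```
.:53 {
    # Hypothetical: map the public FQDN onto the in-cluster Service
    rewrite name identity.example.com is4-service.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
}
```

This goes in the coredns ConfigMap (the Corefile); pods then resolve identity.example.com to the Service's ClusterIP.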

How to send requests to a web server running inside a docker container inside an AWS EC2 machine from the outside world?

I have a Python Flask web server running inside a Docker container on an AWS EC2 Ubuntu machine. The container uses the default network (docker0). From within the EC2 host I can send requests (GET, POST) to this web server using the container IP (172.x.x.x) and the forwarded ports (3000:3000) of the host.
url: http://172.x.x.x:3000/<api address>
How can I send requests (GET, POST) to this web server from the outside world? For example from another web server running in another EC2 machine. Or even from the web using my web browser?
Do I need to get a public IP Address for my docker host?
Is there is another way to interact with such web server within another web server running in another EC2?
If you have a solution please explain with as many details as you can for me to understand it.
The only way I can think of is to write a web server on the main EC2 instance that listens for requests and forwards them to the appropriate container web servers?! But that would mean a lot of redundant code, and I would rather send requests to the web server running in the container directly!
The IP address of the Docker container is not public. Your EC2 instance usually has a public IP address, though. You need an agent listening on a port on your EC2 instance that passes traffic to your Docker/Flask server. Then you would be able to call it from outside using ec2-instance-ip:agent-port.
It's still not a long-term solution, as EC2 IPs change when instances are stopped. You'd better use a load balancer or an Elastic IP if you want the IP/port to be reliable.
That's right, it makes for a lot of redundant code and an extra failure point. That's why it's better to use Amazon's managed Docker service, ECS (https://aws.amazon.com/ecs/). That way you just launch an EC2 instance that acts as a Docker host and has a public IP address. It still allows you to SSH into your EC2 instance and change things.
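One detail worth adding, since it often trips people up alongside -p port publishing: the server inside the container must bind to 0.0.0.0, not 127.0.0.1 (for Flask that means app.run(host="0.0.0.0", port=3000)). A stdlib-only sketch of the binding that makes the server reachable from outside its own network namespace:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to 0.0.0.0 so the server accepts connections from any interface;
# binding to 127.0.0.1 would make it loopback-only and invisible to -p.
server = HTTPServer(("0.0.0.0", 0), Handler)  # port 0 = pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

response = urllib.request.urlopen(f"http://127.0.0.1:{port}/ping").read().decode()
print(response)  # pong
server.shutdown()
```

With the server bound this way, `docker run -p 3000:3000 <image>` publishes it on the host, and the app is reachable at http://<ec2-public-ip>:3000/ once the instance's security group allows inbound TCP 3000.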

Deploying a meteor app on kubernetes using ssl

I am new to Kubernetes and just deployed a Meteor app on Kubernetes + GKE.
The app is currently running insecurely on a bare IP address.
When it comes to securing it and defining a host name for it to run on instead of the IP address, that's where I get confused...
Can anyone explain, and maybe give an example of what exactly is needed (Pods, Services, ...)?
And where does NGINX come into the story?
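For context, the usual shape of the answer is an Ingress in front of the app's Service; the hostname, Service name, and Secret name below are placeholders, and this assumes an NGINX ingress controller is installed (on GKE the built-in GCE ingress class works similarly):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: meteor-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: meteor-tls   # Secret holding your certificate and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: meteor-app   # your Meteor app's Service
                port:
                  number: 3000
```

NGINX's role here is the ingress controller: it terminates TLS for app.example.com and proxies plain HTTP to the Meteor pods.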

Amazon Elastic Load Balancer with IIS

I have an ASP.NET MVC application hosted under IIS on an EC2 instance.
I can access the application without any problems through the EC2 DNS once I set the proper binding in IIS:
http - EC2 DNS - port 80
But if I add an Elastic Load Balancer and then try to access the web application through the Load Balancer DNS, the only way I can get it working is by adding an empty binding in IIS:
"empty host name for http:80"
But this can't be ok.
If I don't add this, the ELB sees my instance as unhealthy, and when I access the ELB DNS I just get HTTP 503 Service Unavailable.
The EC2 instance is in an Auto Scaling group.
I've tried modifying the security group of that instance from allowing all IPs for HTTP:80 to only allowing the Load Balancer IP (amazon-elb/amazon-elb-sg).
Any ideas what I'm doing wrong?
Thanks
I am running several IIS servers behind ELB. Here are the things you need to ensure:
The ELB security group accepts port 80 traffic from anywhere (0.0.0.0/0).
The ELB security group is allowed to send outbound port 80 traffic to the EC2 instance where IIS is running. (This point only applies to ELBs set up inside a VPC; otherwise you can ignore it.)
The security group of the EC2 instance running IIS accepts port 80 traffic from the load balancer.
If this whole set-up is inside a VPC, there are a few other things to check, so let us know if that is the case.
No configuration changes on IIS are needed, for sure.
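The instance-side rule above can be expressed by referencing the ELB's security group as the source instead of an IP range (the group IDs below are placeholders):

```shell
# Allow port 80 into the IIS instance only from the ELB's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111bbbb2222c \
  --protocol tcp --port 80 \
  --source-group sg-0ddd3333eeee4444f
```

Referencing the group rather than an address keeps the rule valid even as the ELB's own IPs change.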
