Deploying a Meteor app on Kubernetes using SSL - meteor

I am new to Kubernetes and just deployed a Meteor app on Kubernetes + GKE.
The app is currently running insecurely on a bare IP address.
When it comes to securing it and giving it a hostname to run on instead of the IP address, that's where I get confused.
Can anyone explain, and maybe give an example of, what exactly is needed (pods, services, ...)?
And where does nginx come into the story?
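In broad strokes, the pieces are: the pods keep running the Meteor container exactly as they do now; a ClusterIP Service sits in front of them; and an Ingress terminates TLS for the hostname and routes to that Service. nginx comes into the story as one possible Ingress controller (on GKE you can also use the built-in GCE controller, or pair nginx with cert-manager for Let's Encrypt certificates). Below is a minimal sketch, assuming the nginx Ingress controller and cert-manager are installed with a ClusterIssuer named letsencrypt-prod, the Deployment is labelled app: meteor-app and listens on port 3000, and meteor.example.com points at the ingress controller's external IP; all of those names are placeholders.

# Service: gives the Meteor pods a stable in-cluster address.
apiVersion: v1
kind: Service
metadata:
  name: meteor-app
spec:
  type: ClusterIP
  selector:
    app: meteor-app          # must match the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 3000       # Meteor's default listen port; adjust if different
---
# Ingress: terminates TLS for the hostname and forwards to the Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: meteor-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes cert-manager and this issuer exist
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - meteor.example.com
      secretName: meteor-app-tls   # cert-manager stores the certificate here
  rules:
    - host: meteor.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: meteor-app
                port:
                  number: 80

With DNS for the hostname pointed at the ingress controller's load balancer IP, cert-manager requests the certificate into the meteor-app-tls Secret and nginx then serves the app over HTTPS.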

Related

Symfony not trusting CloudFront/ALB proxy

When I call $request->getClientIp(), I'm getting an AWS IP address. My app is behind CloudFront and an ALB.
I've set framework.trusted_proxies to '127.0.0.1,REMOTE_ADDR' as per https://symfony.com/doc/current/deployment/proxies.html#but-what-if-the-ip-of-my-reverse-proxy-changes-constantly
The app is running on Fargate (ECS).
Where am I going wrong?
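For reference, the setting that documentation page describes lives in config/packages/framework.yaml and looks roughly like the sketch below; this is a sketch of the likely direction, not a verified fix. With only REMOTE_ADDR trusted, the ALB itself is trusted, but the CloudFront edge address that the ALB records in X-Forwarded-For is not, so getClientIp() stops there and returns it (the AWS IP being seen). The extra CIDR below is a placeholder standing in for CloudFront's published ranges (from AWS's ip-ranges.json), which change over time.

# config/packages/framework.yaml (sketch)
framework:
    # REMOTE_ADDR trusts whatever connects directly to the app (the ALB).
    # The placeholder CIDR stands for the CloudFront ranges that also need
    # to be trusted before the real viewer IP is reached in X-Forwarded-For.
    trusted_proxies: '127.0.0.1,REMOTE_ADDR,130.176.0.0/16'
    trusted_headers: ['x-forwarded-for', 'x-forwarded-proto', 'x-forwarded-port']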

What causes this and how to fix: Error code: SSL_ERROR_NO_CYPHER_OVERLAP

I'm migrating a Bitnami WordPress site from AWS Lightsail to GCP.
The AWS setup includes a purchased wildcard SSL certificate. When I set up the load balancer in GCP, I opted for Google's SSL instead.
I get the error SSL_ERROR_NO_CYPHER_OVERLAP when I access the site via the load balancer's IP. The VM itself is working fine and I can access it on its own external IP.
The domain is still pointing to AWS's server. I wonder if the error is because I have not yet pointed the domain to the load balancer's IP?
I'm hoping to gain some clarity before I update the domain's DNS, because I want to avoid a situation where the site stops working after I make the switch.
Thanks

Best practice for a website hosted on Kubernetes (DigitalOcean)

I followed this guide on how to set up an Nginx Ingress with cert-manager on DigitalOcean Kubernetes: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
The tutorial worked fine and I was able to set everything up as written. However (as the tutorial itself states), you end up with only one of the three nodes passing the load balancer's health check (the one running the single Nginx Ingress controller pod), while the other two show as down. Judging by the comments section, this confuses quite a few people. If all the traffic gets routed through only one node, it is not really scalable. Or am I missing something? Quoting from their tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has
service.spec.externalTrafficPolicy set to the value Local, which
routes all load balancer traffic to nodes running Nginx Ingress Pods.
The other nodes will deliberately fail load balancer health checks so
that Ingress traffic does not get routed to them.
Mainly my question is: is there a best practice I am missing for hosting my website on Kubernetes? It seems I have to choose between scalability (having all the nodes healthy and receiving traffic) and getting the IP of the visiting client.
And for whoever ever finds himself or herself in my situation, this is the reply I got from DigitalOcean Support:
Unfortunately with that Kubernetes setup it would show those other nodes as down without additional traffic configuration. It is possible to skip the nginx ingress part and just use a DigitalOcean load balancer, but this again requires a good deal of setup and can be more difficult than easy.
Their suggestion for a website that is both scalable and keeps client IPs for analytics was to set up a droplet with Nginx and put a load balancer in front of it. More specifically:
As for using a droplet, this would be a normal website configuration with Nginx as your webserver configured to serve content to your app. You would have full access to your application and the Nginx logs on the droplet itself. Putting a load balancer in front of this would require additional configuration, as load balancers do not pass the X-Forwarded-For header by default, so the IP addresses of clients would not show up in the logs. You would need to configure the PROXY protocol on the load balancer and in your Nginx configuration to be able to obtain those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex, unfortunately.
Hope it might save some time for someone.
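If the goal is both an evenly loaded ingress and real client IPs, one combination commonly used on DigitalOcean Kubernetes is to leave the ingress-nginx Service on the default externalTrafficPolicy: Cluster and recover the client address through the PROXY protocol instead. The sketch below assumes the stock ingress-nginx install in the ingress-nginx namespace, and that the DigitalOcean annotation and ConfigMap key names shown are still current; treat the labels and port names as illustrative.

# The ingress controller's Service, with PROXY protocol requested from the
# DigitalOcean load balancer so the original client address travels with
# each connection even though traffic is spread across all nodes.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # all nodes pass the LB health check
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
# ingress-nginx ConfigMap: tell nginx to expect and parse the PROXY header,
# so $remote_addr and the access logs show the real client IP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"

The load-balancer side of this is the same PROXY protocol feature described in the DigitalOcean blog post linked above.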

Make Laravel Homestead Accessible via the Internet

How can I make Laravel Homestead (a Vagrant vm) accessible via the internet? Currently, I have set my router to port-forward to my host machine's local IP. However, that causes the Laravel site to think that all incoming requests are coming from 10.0.2.2.
What would be the correct way to make the site accessible via the internet? Would I have to get the VM assigned an IP from the router's DHCP? If so, how do I do that?
The correct answer these days would be to use Homestead's share alias on the command line via SSH,
e.g. share acme.app
Behind the scenes, this uses ngrok and is documented in the Laravel documentation.
You can make it work with the xip.io service. More details here: http://christoph-rumpel.com/2014/10/access-laravel-homestead-projects-through-other-devices-in-three-little-steps/
Chances are you need to tell Laravel to trust the router as a proxy:
Request::setTrustedProxies([
    '10.0.2.2', // the VirtualBox NAT gateway, which all forwarded requests appear to come from
]);
This will work if the router correctly sets X-Forwarded-For style headers.
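If you do want the VM on the router's LAN (the DHCP route the question asks about) instead of tunnelling out with share/ngrok, Homestead.yaml supports a networks entry that maps to a Vagrant bridged network. A sketch, assuming a reasonably recent Homestead release; the bridge name and IP are placeholders for your host's interface and a free address on your LAN:

# Homestead.yaml (excerpt) — bridges the VM onto the host's network so the
# router can forward ports straight to it rather than to the host machine.
networks:
    - type: "public_network"
      ip: "192.168.1.50"                 # placeholder LAN address
      bridge: "en0: Wi-Fi (AirPort)"     # placeholder host interface name

After editing, vagrant reload --provision applies the change.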

IP address for deployed Meteor app (meteor.com)?

I have a Meteor app I've deployed to meteor.com. It's up and running fine (deployed with meteor deploy myapp.meteor.com); however, I need to adjust firewall rules so our servers can see it.
Does it have a static IP, and if so how to get it?
ping yourappname.meteor.com
Note that the IP is not guaranteed to be static, so it might change from time to time.
