What would be the recommended way to achieve a static IP address, DNS A record, or alias for brokers that does not change on MSK cluster recreation? Considering how rarely this is supposed to happen, putting an NLB in front of it seems like overkill, and it may even cause some issues.
I have an EC2 instance with AWS, and I have installed nginx and created multiple server blocks to serve multiple applications.
However, if nginx goes down, all the applications go down as well.
Is there any way to set up a separate nginx instance for each application? So if one nginx instance goes down, it won't affect the other instances.
Yes, it's technically possible to install 2 nginx instances on the same server, but I would do it another way.
1 - You could just create multiple EC2 instances. The downside of this approach is that it may get harder to maintain, depending on how many instances you want.
2 - You could use Docker or any of its alternatives to create containers and solve this problem. You can create as many containers as you need, with totally separate nginx instances; see the sketch below. Although Docker is simple to learn and you can start using it in no time, the downside of this approach is that you need to put in a little effort to learn it, and your main EC2 instance needs to have enough resources to share between the containers.
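As a rough illustration of option 2, here is a minimal docker-compose sketch, assuming two applications; the service names, host ports, and config file paths are all hypothetical:

    # docker-compose.yml -- two isolated nginx instances, one per app
    version: "3"
    services:
      nginx-app1:
        image: nginx:stable
        ports:
          - "8080:80"        # app1 served on host port 8080
        volumes:
          - ./app1.conf:/etc/nginx/conf.d/default.conf:ro
        restart: unless-stopped
      nginx-app2:
        image: nginx:stable
        ports:
          - "8081:80"        # app2 served on host port 8081
        volumes:
          - ./app2.conf:/etc/nginx/conf.d/default.conf:ro
        restart: unless-stopped

If nginx-app1 crashes, nginx-app2 keeps serving its application, and the restart policy brings the failed container back up.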
I hope it helps!
If it's possible, use ELB instead of nginx; this will be more convenient. But if ELB doesn't work for you, nginx already supports a High Availability mode to avoid the single point of failure you mentioned.
It's documented officially here:
https://www.nginx.com/products/nginx/high-availability/
It's better than having one nginx machine for every application and guarantees more availability.
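The setup behind that page is typically an active-passive pair of nginx machines sharing a floating virtual IP via keepalived. A rough sketch, assuming the interface name eth0 and the virtual IP 10.0.0.100 (both placeholders):

    # /etc/keepalived/keepalived.conf on the primary nginx box
    vrrp_script check_nginx {
        script "pidof nginx"    # node counts as healthy while nginx is running
        interval 2
    }
    vrrp_instance VI_1 {
        state MASTER            # use BACKUP on the second box
        interface eth0          # placeholder interface name
        virtual_router_id 51
        priority 100            # give the backup box a lower priority, e.g. 90
        virtual_ipaddress {
            10.0.0.100          # placeholder floating IP clients connect to
        }
        track_script {
            check_nginx
        }
    }

If the primary dies, the backup takes over the virtual IP and traffic keeps flowing.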
The type of redundancy you're looking for is usually provided by a load balancer or reverse proxy in practice. There are a lot of ways this can be achieved architecturally, but generally speaking it looks like this:
Run multiple nginx instances with the same server definitions, and put a balancer like haproxy in front of them. This allows the balancer to check which nginx instances are online and send requests to each in turn. Then if an instance goes down, or the orchestrator is bringing up a new one, requests only get sent to the online ones. A sketch of this idea follows.
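A minimal haproxy sketch of that arrangement, assuming two identical nginx instances at made-up addresses; the check keyword makes haproxy probe each instance and skip any that are down:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind 0.0.0.0:80
        default_backend nginx_pool

    backend nginx_pool
        balance roundrobin
        server nginx1 10.0.0.11:80 check   # placeholder addresses
        server nginx2 10.0.0.12:80 check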
If requests need to be distributed more heavily, you could have nginx instances for each server, with a reverse proxy directed at each instance or node.
There may be some overhead if you do it that way, and your setup may become difficult to maintain later because there are many nginx instances. For example, if you need to update nginx or add some modules, it will be harder.
How about trying an EC2 Auto Scaling group, even with a minimum of 1 and a desired capacity of 1? That way it will automatically launch a new instance if the current one goes down; a CLI sketch follows.
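A rough sketch of that from the AWS CLI, assuming a launch template named nginx-template and the us-east-1a availability zone (both placeholders):

    # Keep exactly one instance alive; a replacement launches if it fails
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name nginx-asg \
        --launch-template LaunchTemplateName=nginx-template \
        --min-size 1 --max-size 1 --desired-capacity 1 \
        --availability-zones us-east-1a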
If you need to preserve settings like the Elastic IP of your EC2 instance, look into EC2 instance recovery. Unlike the Auto Scaling group, it will restore your setup.
But it would be better to use a load balancer like an ALB with at least 2 instances. Using an ALB will also make you more secure. You may also want to read about ALB target groups; they will give you more options for solving your current problem.
Intro:
On AWS, load balancers are expensive ($20/month + usage), so I'm looking for a way to achieve flexible load-balancing between the k8s nodes without having to pay that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon. I just need services to be HA. I can get a small EC2 instance for $3.50/month that can easily handle the current traffic, so I'm chasing that option now.
Current setup
Currently, I've set up a regular standalone Nginx instance (outside of k8s) that does load balancing between the nodes in my cluster, on which all services are exposed through NodePorts. This works really well, but whenever my cluster topology changes (restarting, adding, or removing nodes), I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load-balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the Træfik config in sync with Kubernetes' list of nodes, so that my Kubernetes services are still HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change the backend servers whenever the cluster changes.
Sounds simple, right? ;-)
When looking at the Træfik documentation, it seems to want an ingress resource to send its traffic to, and an ingress resource requires an ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?
Here is something that could be useful in your case: https://github.com/unibet/ext_nginx. But I'm not sure whether the project is still in development, and configuration is probably hard, as you need to allow external ingress to access the internal k8s network.
Maybe you can try to do that at the AWS level? You can add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the nginx configuration if something changed. A sketch of such a script follows.
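A rough sketch of that cron script, assuming the tag key "k8s", a NodePort of 30080, and the config path shown (all placeholders):

    #!/bin/bash
    # Rebuild the nginx upstream block from running EC2 instances tagged "k8s"
    ips=$(aws ec2 describe-instances \
        --filters "Name=tag-key,Values=k8s" "Name=instance-state-name,Values=running" \
        --query "Reservations[].Instances[].PrivateIpAddress" --output text)

    conf=/etc/nginx/conf.d/k8s-upstream.conf
    {
        echo "upstream k8s_nodes {"
        for ip in $ips; do
            echo "    server ${ip}:30080;"   # placeholder NodePort
        done
        echo "}"
    } > "${conf}.new"

    # Only reload nginx when the node list actually changed
    if ! cmp -s "${conf}.new" "$conf"; then
        mv "${conf}.new" "$conf"
        nginx -s reload
    fi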
Idea
Gradually use a few small dedicated servers in combination with an expensive cloud platform, where, under light traffic, the dedicated servers are filled up first before the cloud kicks in. This hedges against occasional traffic spikes.
nginx
Is there an easy way (without nginx plus) to achieve a "waterfall-like" setup, where the small servers are served first, up to a maximum number of concurrent connections (or better, current bandwidth), before the cloud platform sees any traffic?
Nginx Config, Libraries, Tools?
Thanks
You can use the nginx upstream module.
If you want total control, configure your cloud servers with the backup parameter so that they won't be used until your primary servers fail. Then use custom monitoring scripts to determine when those cloud servers should kick in, change the nginx config, and remove the backup keyword from them. Also monitor the conditions under which you want to stop using the cloud servers, and alter the nginx config again.
A simpler solution (but without fine-tuning like avoiding spikes) is to use the max_conns=number parameter. Nginx should start to use the backup server if all the other servers have already reached their maximum number of connections (I didn't test it); see the sketch below.
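A minimal sketch of such an upstream block, with placeholder addresses and limits; whether the backup server kicks in exactly at the max_conns threshold should be verified, as noted above:

    upstream waterfall {
        # Small dedicated servers take traffic first, capped by max_conns
        server 192.0.2.10:80 max_conns=100;
        server 192.0.2.11:80 max_conns=100;
        # Cloud server should only see traffic when the primaries are saturated
        server 203.0.113.5:80 backup;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://waterfall;
        }
    }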
NOTE: the max_conns parameter was only available in paid nginx between v1.5.9 and v1.11.5, so with those versions the only solution is your own monitoring plus reloading the nginx config when the upstream servers need to change. Thanks to Mickaël Le Baillif's comment for pointing out that this parameter is now available to all.
Background
I currently have multiple low power servers running on my LAN. They all run different services, but some of them are similar. I also own 3 domains and have many sub-domains for those domains.
So, what's your problem?
Well, as I said before, some of my services are REALLY similar and run on the same port (I have an Owncloud server on one, and my website is hosted on another). This means that if I want owncloud.mydomain.com to go to my Owncloud server and www.mydomain.com to go to my web server, I have a bit of an issue. Both sub-domains just go to my house, and the services use the same port, so I can't really separate the traffic per subdomain.
Edit: it also needs to be able to direct many types of traffic, like SSH, HTTPS, and FTP.
Possible Solutions
I've thought about just running the different services on different ports, but that would not be optimal AT ALL. It means it's weird to look at, people will have a harder time using any of my services, and it's generally something I do not like.
I've thought about running similar services on the same server, but these are some pretty dinky servers, so I'd rather not do anything like that. Also, since the servers are a little old, it's nice to know that if one of them dies, at least I'll still have my other services. I don't think this option is good at all.
Best possible solution: I've heard there's a service with the exact functionality I'm looking for, called haproxy. My only issue is that I don't know how to use this service, and I especially don't know how to get the use I want out of it.
My final question
I would love to get haproxy working; I just need to know how to set it up the way I need. If anyone has a link to a tutorial on how to do what I want specifically (I've already found out how to get haproxy working, just not the way I want), I would be really grateful. I would look for this myself, but I already have, and I don't even know what to search for. Can anyone help me out?
Thank you
Make your own config file, say haproxy.cfg, containing something like the following:

    defaults
        mode http

    frontend my_web_frontend
        bind 0.0.0.0:80
        timeout client 86400000
        # Pick a backend based on the Host header of each request
        acl is_owncloud hdr_end(host) -i owncloud.mydomain.com
        acl is_webserver hdr_end(host) -i www.mydomain.com
        use_backend owncloud if is_owncloud
        use_backend webserver if is_webserver

    backend owncloud
        balance source
        option forwardfor
        option httpclose
        timeout queue 500000
        timeout server 500000
        timeout connect 500000
        # The machine actually running Owncloud
        server server1 10.0.0.25:5000 weight 1 maxconn 1024 check inter 10000

    backend webserver
        balance source
        option forwardfor
        option httpclose
        timeout queue 500000
        timeout server 500000
        timeout connect 500000
        # The machine actually serving the website
        server server1 10.0.0.30:80 weight 1 maxconn 1024 check inter 10000
And then run haproxy on one of your servers.
./haproxy -f ~/haproxy.cfg
Point all your domains and subdomains to this machine. They'll route according to the config.
You only need one IP address, but you need to configure the virtual hosts correctly. The link below provides step-by-step details for Ubuntu virtual host configuration. This is the easiest way, and everyone else will agree it's the cheapest if you insist on using your personal network.
https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3
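For comparison with the haproxy answer above, a rough nginx equivalent using one server block per subdomain, with the same placeholder backend addresses:

    # One server block per subdomain; nginx picks by the Host header
    server {
        listen 80;
        server_name owncloud.mydomain.com;
        location / {
            proxy_pass http://10.0.0.25:5000;   # the Owncloud box
        }
    }
    server {
        listen 80;
        server_name www.mydomain.com;
        location / {
            proxy_pass http://10.0.0.30:80;     # the web server box
        }
    }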
I need my app to be able to access a third-party API that limits access based on a single, static IP address.
Due to the dynamic nature of the Heroku dynos and routing mesh, this is not possible - I'll need something with a fixed IP address to act as a proxy.
A US East EC2 Linux/nginx instance would seem the sensible choice, but this seems like a lot of work/maintenance for something pretty trivial. Does anyone know of any services out there that do this?
OK, so after a bit of research, I've discovered that the best way to do this currently is indeed with an AWS US East EC2 instance running some sort of proxy. I've gone with Linux/nginx; a sketch of the idea follows.
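A rough sketch of what that nginx proxy can look like, with a placeholder API hostname; the app calls the EC2 instance, and the API only ever sees the instance's Elastic IP:

    server {
        listen 80;
        location / {
            # Requests leave from this instance's static (Elastic) IP
            proxy_pass https://api.example-thirdparty.com;
            proxy_set_header Host api.example-thirdparty.com;
        }
    }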
I've also learned there is a Heroku add-on currently in alpha stage of development that will handle exactly this requirement. If you'd like to test it, get in touch with Heroku support.
You can also use the Proximo add-on to get a static outbound IP address via proxy without any of the maintenance headaches.