Currently I run nginx on a VPS and I want to install k3s. The VPS has two publicly reachable IP addresses, and I want the nginx on the VPS itself to respond only on one specific address of the two.
How can I set this up so that the internal nginx keeps running alongside k3s?
You can do that with a NodePort Service. Create an Nginx Service of type NodePort in k3s.
The NodePort will expose your service on the host at a specific port.
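A minimal sketch with kubectl, assuming the nginx Deployment in the cluster is named nginx and serves on container port 80 (the names and ports here are assumptions, not from the question):
# Expose the in-cluster nginx Deployment as a NodePort Service
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort --name=nginx
# See which node port was allocated (30000-32767 by default)
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
# The service is then reachable at http://<node-ip>:<allocated-port>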
References:
Kubernetes docs: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
Rancher docs: https://rancher.com/docs/rancher/v2.x/en/v1.6-migration/expose-services/
Related
I'm trying to deploy an nginx Deployment to see if my cluster is working properly. It's a basic Kubernetes install on a VPS (kubeadm, Ubuntu 22.04, Kubernetes 1.24, containerd runtime).
I successfully deployed MetalLB via Helm on this VPS and assigned the public IP of the VPS to the address pool using the CRD apiVersion: metallb.io/v1beta1, kind: IPAddressPool.
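The pool manifest looks roughly like this (the pool name is a placeholder, and the address is masked the same way as in the service output below):
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 145.181.xx.xx/32
EOF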
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx LoadBalancer 10.106.57.195 145.181.xx.xx 80:31463/TCP
My goal is to send a request to the public IP of the VPS, 145.181.xx.xx, and get the nginx test page.
The problem is that I get timeouts and connection refused errors when I try to reach this IP address from outside the cluster. Inside the cluster everything works correctly: calling 145.181.xx.xx from within the cluster returns the nginx test page.
There is no firewall issue: I tried setting up a plain nginx without Kubernetes (via systemctl) and I was able to reach port 80 on 145.181.xx.xx.
Any suggestions or ideas on what the problem could be, or how I can debug it?
I'm facing the same issue.
The Kubernetes cluster is deployed with Kubespray across 3 master and 5 worker nodes. MetalLB is deployed with Helm, and an IPAddressPool and an L2Advertisement are configured. I'm also deploying a simple nginx pod and a Service to check whether MetalLB is working.
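The MetalLB resources look roughly like this (the names and the address range are placeholders, not my actual values):
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF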
MetalLB assigns the first IP from the pool to the nginx Service, and I'm able to curl the nginx default page from any node in the cluster. However, if I try to access this IP address from outside the cluster, I get timeouts.
But here is the fun part. When I modify the nginx manifest (renaming the Deployment and Service) and deploy it in the cluster (so two nginx pods and Services are present), MetalLB assigns another IP from the pool to the second nginx Service, and I'm able to access this second IP address from outside the cluster.
Unfortunately, I don't have an explanation or a solution to this issue, but I'm investigating it.
We have an application that is deployed to a Kubernetes cluster on a bare-metal system. I have exposed the service as a NodePort. We need to expose the service to the outside world using the domain name myapp.example.com. We have created the necessary DNS mapping and we have configured our VIP on our BIG-IP load balancer. I would like to know which ingress solution we need to implement: the NGINX/Kubernetes ingress controller or the BIG-IP controller? Will the Kubernetes NGINX ingress controller work with BIG-IP, and how should we expose ingress-nginx: as type LoadBalancer or NodePort?
I haven't used BIG-IP much, but I found that they have a controller for Kubernetes.
However, I think the simplest way, if you already have the BIG-IP load balancer set up and a k8s cluster running, is to create a NodePort Service for the pod you want to expose and get the node port number of that Service (let's assume 30001). This port is now open on every node and can be used to reach the service inside Kubernetes via a node's IP. Then configure the BIG-IP load balancer pool to forward all incoming traffic to <Node's IP>:30001.
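A rough sketch of that flow with kubectl (the Deployment and Service names here are just examples; Kubernetes allocates the node port unless you pin it):
# Expose the application's pods through a NodePort Service
kubectl expose deployment myapp --port=80 --type=NodePort --name=myapp-nodeport
# Find out which node port was allocated (e.g. 30001)
kubectl get svc myapp-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
# List the node IPs that the BIG-IP pool members should point at
kubectl get nodes -o wide
# The BIG-IP pool then forwards traffic for myapp.example.com to <node-ip>:<node-port>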
All of this is theory based on what I know about k8s and how it works. Give it a try and let me know if it works.
I have a VM running on GCP with Docker installed on it. An NGINX web server runs on it with a static reserved external/public IP address, and I can easily access the site via that public IP address. Now I have Artifactory running on this VM as a Docker container, and the whole idea is to access this Docker container (Artifactory, to be precise) via the same public IP address on a specific port, say 8081. I have configured a reverse proxy in the NGINX web server to pass requests to the internal IP address of my Artifactory Docker container, but the requests are not reaching it and I cannot access Artifactory.
The Docker container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a4119d923hd8 docker.bintray.io/jfrog/artifactory-pro:latest "/entrypoint-artifac…" 57 minutes ago Up 57 minutes 0.0.0.0:8081->8081/tcp my-app-dev-artifactory-pro
Here are my reverse proxy settings:
server {
    listen 81;
    listen [::]:81;
    server_name [My External Public IP Address];

    location / {
        proxy_pass https://localhost:8081;
    }
}
Since you are using GCP to run this, I think your issue is very simple. First, you do not need Nginx at all in order to reach Artifactory inside a Docker container. You should be able to reach it very easily using the IP and port (for example XX.XX.XX.XX:8081), and I can see that in the Nginx configuration you are listening on port 81, which is not in use by Artifactory. I think the issue here is that either you did not allow HTTP traffic to your GCP instance in the instance configuration, or you did not map the port in the "docker run" command.
You can see whether the port is mapped by running "docker ps" and checking whether the "PORTS" column shows mapped ports. If not, you will need to map the port (8081 to 8081) and make sure your GCP instance has HTTP traffic enabled; then you will be able to reach Artifactory at IP:PORT.
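A rough sketch of both steps (the container name and firewall rule name are arbitrary, and the firewall rule is only needed if the instance does not already allow traffic on that port):
# Run Artifactory with the container port published on the host
docker run -d --name artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-pro:latest
# Verify the mapping shows up in the PORTS column
docker ps
# If needed, open the port in the GCP firewall
gcloud compute firewall-rules create allow-artifactory --allow=tcp:8081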
I used Terraform to deploy my k8s cluster, and I used kubectl to deploy nginx on my worker nodes. Again using kubectl, creating a LoadBalancer Service targeting the nginx deployment on port 80 worked perfectly fine. I wanted to test using an ALB rather than an ELB.
I deleted the ELB, and then used the EC2 interface to set up a target group.
The target group uses port 80, is on the same vpc, and is targeting the two worker nodes.
Next I created an ALB, which is internet-facing, uses the same security group as the nodes, and again is on the same VPC. It's listening on port 80 and forwarding traffic to my target group.
I can't access nginx using the DNS name. I'm pretty sure it has to do with my port configuration?
Kubernetes does not natively support ALBs; you need the AWS ALB Ingress Controller:
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
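With that controller installed, the ALB is created from an Ingress resource rather than from a Service of type LoadBalancer. A minimal sketch, assuming a recent Kubernetes version, a NodePort Service named nginx on port 80, and the controller's documented annotations (all of these names are assumptions):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF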
We have a dockerized app that we imported onto a Compute Engine instance running Ubuntu 16.04.
It contains an nginx reverse proxy running on port 80, and in /etc/hosts we've added 127.0.0.1 mydockerizedapp.
The GCE instance has an external IP address.
How can I set this up so that when I visit this external IP in a browser, I see the files served by the nginx container?
You have to expose your container's port on the host machine by mapping it.
If you use the CLI: --port-mappings=80:80:TCP
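If the container is started with plain Docker, the equivalent is the -p flag; a rough sketch (the image name is a placeholder):
# Publish container port 80 on host port 80 so the nginx inside is reachable on the external IP
docker run -d --name mydockerizedapp -p 80:80 <your-nginx-image>
# Verify the mapping in the PORTS column
docker ps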