Kubernetes Blocked Mixed Active Content - http

Issue:
When working with K8s [Kubernetes] in development, I'm running into an issue where my Ingress/Nginx seems to keep my client side (React) from pulling data from my API (Flask/Python).
Details:
The connection between the client and the API is facilitated by an environment variable that we'll call API_URL for the sake of this post. API_URL is how the client knows which API routes to GET and POST.
On Minikube with K8s in dev, the Minikube IP that is provided forces https, from what I understand (or maybe it's ingress/nginx?). The API_URL environment variable is set to value: api-cluster-ip-service. However, when I hit the dev site, I can see that this value gets assigned to http://localhost (not https).
This causes: Blocked loading mixed active content "http://localhost/server/stuff". As a result, I can't pull anything from my API.
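For illustration, the variable presumably reaches the React container through the client deployment's env block, roughly like this (a sketch; the container name and image are assumptions, the value is from above):
# Hypothetical excerpt from the client Deployment
containers:
  - name: client
    image: <client-image>
    env:
      - name: API_URL
        value: api-cluster-ip-service   # a Service name that only resolves inside the cluster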
Question:
Is there a recommended approach for this? Perhaps a way to turn off https in dev (I don't even know if that's possible)? Or maybe I need a certificate for localhost? I'm fairly new to Kubernetes, so any help is much appreciated!
Ingress-server.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: client-cluster-ip-service
              servicePort: 3000
          - path: /api/
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000
Ingress Namespace output
kubectl get ing --all-namespaces
NAMESPACE   NAME              HOSTS   ADDRESS     PORTS   AGE
default     ingress-service   *       10.0.2.15   80      4d21h

I found the cause of my problem... and the error message was fairly misleading. In a local environment, my client side talks to my API via http://localhost/api/. However, I realized that because I was on Minikube, it's no longer on localhost (Minikube has its own IP). Once I changed my API_URL to http://<minikube-ip>, it began working immediately.
The only challenge here is that Minikube, when stopped, can come back with a different IP, meaning I need to grab the IP and update the API_URL each time. However, that's a separate question/answer.
Summary:
Changed my API_URL from http://localhost to the Minikube IP. Began working immediately.
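For reference, a one-liner sketch for rebuilding API_URL from the current Minikube IP after each restart (how the variable then reaches the client is up to your setup):
# Grab the current Minikube IP and rebuild the URL
export API_URL="http://$(minikube ip)"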

Related

externalTrafficPolicy Local on GKE service not working

I'm using GKE version 1.21.12-gke.1700 and I'm trying to set externalTrafficPolicy to Local on my nginx external load balancer (not ingress). After the change, nothing happens, and I still see the source as an internal IP from the Kubernetes IP range instead of the client's IP.
This is my service's YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ext
  namespace: my-namespace
spec:
  externalTrafficPolicy: Local
  healthCheckNodePort: xxxxx
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
    - x.x.x.x/32
  ports:
    - name: dashboard
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
And the nginx logs:
*2 access forbidden by rule, client: 10.X.X.X
My goal is to implement endpoint-based restrictions (deny all and allow only specific clients).
You can use curl to query the IP from the load balancer, for example: curl 202.0.113.120. Please note that setting service.spec.externalTrafficPolicy to Local in GKE removes the nodes without local service endpoints from the list of nodes eligible for load-balanced traffic, so if you apply the Local value to your external traffic policy, you need at least one service endpoint. Because of this, it is important to configure service.spec.healthCheckNodePort. This port needs to be allowed in the ingress firewall rule; you can get the health check node port from your YAML file with this command:
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
You can follow this guide if you need more information about how the LoadBalancer service type works in GKE. Finally, you can limit outside traffic at your external load balancer by setting loadBalancerSourceRanges. In the following link, you can find more information on how to protect your applications from outside traffic.
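As a sketch, allowing the health check node port through a GCP firewall rule might look like the following (the rule and network names are placeholders; the source ranges are Google's documented health-check ranges):
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=<your-network> \
  --allow=tcp:<healthCheckNodePort> \
  --source-ranges=130.211.0.0/22,35.191.0.0/16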

Kubernetes Ingress doesn't find/expose the application properly

I have one application on two environments. It's been running for well over a year; now I had to re-deploy it on one env, and I'm left with half-working external traffic.
Example of the working one:
$ kubectl get ingress
NAME     HOSTS                ADDRESS                         PORTS     AGE
my-app   prod-app.my.domain   <public IP, e.g. 41.30.20.20>   80, 443   127d
and the not working one
MacBook-Pro% kubectl get ingress
NAME     HOSTS               ADDRESS (private addresses, not the public one I assigned?)   PORTS     AGE
my-app   dev-app.my.domain   10.223.0.76,10.223.0.80,10.223.0.81,10.223.0.99                80, 443   5m5s
The deployment works like so: in Helm I have the deployments, services, etc., plus a Kubernetes Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.deployment.name }}
  namespace: {{ .Values.deployment.env }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    <some other annotations>
spec:
  tls:
    - secretName: {{ .Values.ingress.tlsSecretName.Games }}
  rules:
    - host: [prod,dev]-app.my.domain
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: {{ .Values.service.port }}
Before that, I deployed the stable/nginx-ingress Helm chart (yup, I know there is ingress-nginx/ingress-nginx - I will migrate to it soon, but first I want to bring back the env).
and the simple nginx config
controller:
  name: main
  tag: "v0.41.2"
  config:
    log-format-upstream: ....
  replicaCount: 4
  service:
    externalTrafficPolicy: Local
  updateStrategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%  # max number of Pods that can be unavailable during the update
    type: RollingUpdate
  # We want to disperse pods across the whole cluster, onto each data node
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  # app label is set in the main deployment manifest
                  # https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-deployment.yaml#L6
                  values:
                    - nginx-ingress
                - key: release
                  operator: In
                  values:
                    - my-app-ingress
            topologyKey: kubernetes.io/hostname
Any idea why my Kubernetes Ingress has private addresses, not the assigned public one?
and my services on prod are
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
my-app                          NodePort       10.190.173.152   <none>        8093:32519/TCP               127d
my-app-ingress-stg-controller   LoadBalancer   10.190.180.54    <PUB_IP>      80:30111/TCP,443:30752/TCP   26d
and on dev
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
my-app                    NodePort       10.190.79.119   <none>        8093:30858/TCP               10m
my-app-ingress-dev-main   LoadBalancer   10.190.93.104   <PUB_IP>      80:32027/TCP,443:30534/TCP   10m
I kinda see the problem (because I already tried migrating to the new nginx a month ago, and on dev there is still the old one, but there were issues with having multiple envs with ingresses on the same dev cluster). I guess I'll try to migrate to the new one and see if that somehow fixes the issue. Other than that, any idea why the private addresses?
Not sure how it works, but I deployed ingress (the nginx-ingress Helm chart) after deploying the application Helm chart, and at first all pods were 1/1 ready and the site didn't respond, and after ~10 min it did, ¯\_(ツ)_/¯ no idea why it took so long. For future reference, what I did was (see the sketch after this list):
1. Reserve a public IP in GCP (my cloud provider).
2. Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1.
3. Deploy the app Helm chart with the ingress in it, with my domain and SSL cert in it, and the Kubernetes service (load balancer) having that public IP.
4. Deploy nginx-ingress pointing to that public address from the domain.
If there is any mistake in my logic, please say so and I'll update it.
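A minimal sketch of steps 1 and 4, assuming gcloud and the Helm 2-era stable/nginx-ingress chart (the names, region, and IP are placeholders):
# Step 1: reserve a regional static IP in GCP
gcloud compute addresses create my-app-ingress-ip --region <region>
# Step 4: make the nginx-ingress controller's LoadBalancer service use it
helm install stable/nginx-ingress --name my-app-ingress-dev-main \
  --set controller.service.loadBalancerIP=<RESERVED_IP>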
#potatopotato I have just moved your own answer from the initial question into a separate community wiki answer. That way it will be more searchable and indexed in future searches.
Explanation regarding the below:
Not sure how it works, but I deployed ingress (nginx-ingress Helm
chart) after deploying the application Helm chart, and at first all
pods were 1/1 ready and the site didn't respond, and after ~10 min it
did, ¯\_(ツ)_/¯ no idea why it took so long
As per official documentation:
Note: It might take a few minutes for GKE to allocate an external IP address and prepare the load balancer. You might get errors like HTTP 404 and HTTP 500 until the load balancer is ready to serve the traffic.

Using Kubernetes ingress controller as reverse proxy to other services in the cluster

I have a simple Kubernetes cluster on kops and AWS, which is serving a web app; there is a single HTML page and a few APIs. They are all running as services. I want to expose all endpoints (HTML and APIs) publicly for the web page to work.
I have exposed the HTML service as a LoadBalancer, and I am also using the nginx-ingress controller. I want to use the same LoadBalancer to expose the other APIs as well (using a different LoadBalancer for each service seems like a bad and expensive approach). This is something I was able to do using an Nginx reverse proxy in the on-premise version of the same application, by giving each API a different path in the nginx conf file.
However, I am not able to do the same in the cluster. I tried a Service ingress, but somehow I am not able to get the desired result: if I add a path, e.g. path: "/mobiles-service", and then add the specific service for it, the HTTP requests somehow do not get redirected to the service. Only the HTML service works, on the root path. Any help would be appreciated.
First you need to create an ingress controller for your kops cluster running on AWS:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml
Then check if ingress-nginx service is created by running:
kubectl get svc ingress-nginx -n kube-ingress
Then create your pods and ClusterIP-type services for each of your apps, like the sample below:
kind: Service
apiVersion: v1
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
    - port: <app-port>
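For completeness, the pods behind that service might come from a Deployment like this (a sketch; the image and replica count are placeholders, not from the original answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1                # must match the Service selector above
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: <app1-image>              # placeholder image
          ports:
            - containerPort: <app-port>    # the port the Service targets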
Then create an ingress rule file like the sample below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: app1-service
              servicePort: <app1-port>
          - path: /app2
            backend:
              serviceName: app2-service
              servicePort: <app2-port>
Once you deploy this ingress rule YAML, Kubernetes creates an Ingress resource in your cluster. The ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer that routes all external HTTP traffic (on port 80) to the backend app services you exposed on the specified paths.
You can see the newly created ingress rule by running:
kubectl get ingress
And you will see output like below:
NAME              HOSTS   ADDRESS                                                                    PORTS   AGE
example-ingress   *       a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com   80      1m
On the relevant paths, like http://external-dns-name/app1 and http://external-dns-name/app2, you will reach your apps, while on the root path / you will get <default backend - 404>.
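For example, with the sample ELB hostname from the output above:
curl http://a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com/app1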

kubernetes expose nginx to static ip in gcp with ingress service configuration error

I had a couple of questions regarding Kubernetes ingress services [/controllers].
For example, I have an nginx frontend image that I am trying to run with kubectl:
kubectl run <deployment> --image <repo> --port <internal-nginx-port>
Then I tried to expose this to the outside world with a service:
kubectl expose deployment <deployment> --target-port <port>
Then I tried to create an ingress service with the following nginx-ing.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: urtutorsv2ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "coreos"
spec:
  backend:
    serviceName: <service>
    servicePort: <port>
where my ingress.global-static-ip-name "coreos" is correctly created and available in the Google Cloud console.
[I am assuming the servicePort here is the port I want on my "coreos" IP, so I set it to 80 initially, which didn't work; then I tried setting it to the port specified in the first step, but it still didn't work.]
So the issue is that I am not able to access the frontend at either URL:
http://COREOS_IP or http://COREOS_IP:<port>
That is why I tried
kubectl expose deployment <deployment> --target-port <port> --type NodePort
to see if it worked with a NodePort, and I was able to access the frontend.
So I am thinking there might be a configuration mistake here, because of which I am not getting results with the ingress.
Can anyone here help debug/fix the issue?
Yeah, the service is there. I checked its status with kubectl get services and kubectl describe service k8urtutorsv2; it showed the service. I tried editing it and saved the nodePort value. The thing is, it works with the NodePort but not with 80 or 443.
You cannot directly expose a service on port 80 or 443 through a NodePort.
The available NodePort range is predefined in the kube-apiserver configuration by the service-node-port-range option, with the default value 30000-32767.
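If you want the frontend reachable on port 80, the usual route is a LoadBalancer service (or an ingress) in front of it rather than a raw NodePort. A minimal sketch, assuming the deployment was created with kubectl run (which labels pods with run=<deployment>):
apiVersion: v1
kind: Service
metadata:
  name: <service>
spec:
  type: LoadBalancer          # the cloud load balancer can listen on 80/443
  selector:
    run: <deployment>         # kubectl run sets the "run" label (assumption)
  ports:
    - port: 80                              # external port on the load balancer
      targetPort: <internal-nginx-port>     # the container port from the first step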

kubernetes nginx ingress fails to redirect HTTP to HTTPS

I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses to redirect, although that shouldn't even be necessary, since the docs imply the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - foo.com
      secretName: tls-secret
  rules:
    - host: foo.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: foo-prod
              servicePort: 80
kubectl describe ing https-ingress output
Name:             https-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.56.0.3:8080)
TLS:
  tls-secret terminates foo.com
Rules:
  Host     Path  Backends
  ----     ----  --------
  foo.com
           /*    foo-prod:80 (<none>)
Annotations:
  force-ssl-redirect:  true
  secure-backends:     true
  ssl-redirect:        true
Events:  <none>
The problem was indeed the fact that the Nginx ingress controller is not standard on the Google Cloud Platform and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish, in hopes of helping anyone else who uses that specific cloud, has that specific need, and finds that generic guides don't quite fit the bill.
Get Cluster Credentials
This is a GCP-specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8S-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found in GCP's own docs; what they neglect to mention, however, is that on newer versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary for the aforementioned RBAC factor.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see #JahongirRahmonov's example above. What you MUST keep in mind is that this step can take anywhere from half an hour to over an hour to take effect - if you change the config and check again immediately, it won't be set up yet, but don't take that as an indication that you messed something up! Wait for a while and see first.
It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.
GCP has a default ingress controller which at the time of this writing cannot force https.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
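In context, it would look something like this on the asker's ingress (a sketch; only the annotation itself comes from this answer):
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"   # the annotation suggested above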
Hope it helps.
