There is a helm chart for nexus: https://github.com/helm/charts/tree/master/stable/sonatype-nexus
I installed it using Helm:
helm install stable/sonatype-nexus --name=nexus
But it didn't work because of nexus-proxy. Here are the logs from the nexus-proxy container:
[vert.x-eventloop-thread-0]
[io.vertx.ext.web.impl.RoutingContextImplBase] Unexpected exception in
route
So I started to google and found this post:
https://github.com/travelaudience/nexus-proxy/issues/4
There was no answer except this:
I encountered this error. Using imageTag=2.2.0 fixed the problem for
me.
So I deleted the nexus release and installed the chart like so:
helm install stable/sonatype-nexus --name=nexus -f nexus.yml
nexus.yml is this file with the value of the nexus-proxy image tag replaced:
https://github.com/helm/charts/blob/master/stable/sonatype-nexus/values.yaml
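Roughly, the only part of my nexus.yml that differs from the upstream values.yaml is the nexus-proxy image tag (a sketch, using the 2.2.0 tag suggested in the linked issue):
nexusProxy:
  imageName: quay.io/travelaudience/docker-nexus-proxy
  imageTag: 2.2.0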
Now, when I hit http://localhost:8080/ I get this:
Invalid host. To browse Nexus, click here/. To use the Docker
registry, point your client at .
Tadaaam, what did I do wrong?
I am trying to install this chart on my Kubernetes on Mac. I haven't succeeded in installing this chart on GKE.
I have met the same issue (in stable/sonatype-nexus-1.10.0) and I have tried to solve it. I guess your problem is due to the Docker image quay.io/travelaudience/docker-nexus-proxy. You can see its configuration in values.yaml:
nexusProxy:
  imageName: quay.io/travelaudience/docker-nexus-proxy
  imageTag: 2.3.0
  imagePullPolicy: IfNotPresent
  port: 8080
  env:
    nexusDockerHost: 127.0.0.1
    nexusHttpHost: 127.0.0.1
    enforceHttps: false
    cloudIamAuthEnabled: false
By default, nexusDockerHost and nexusHttpHost are left blank, so the proxy will deny your access to Nexus; they must be set to allow access via docker-nexus-proxy. In my case, after I added 127.0.0.1 to nexusDockerHost/nexusHttpHost, I could access the Nexus UI through the chart's NodePort configuration.
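If you prefer not to maintain a whole values file, the same two values can also be passed with --set (a sketch using the Helm 2 syntax from the question; adjust the host values to whatever you actually use):
helm install stable/sonatype-nexus --name=nexus \
  --set nexusProxy.env.nexusHttpHost=127.0.0.1 \
  --set nexusProxy.env.nexusDockerHost=127.0.0.1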
Worked for me! It created another ReplicaSet and I had to delete the original one in order to avoid a healthcheck failure in the new pod, but then it worked properly.
I have fixed this with the following change; it seems to be a port issue. I have deployed on a private AWS EKS cluster.
nexusProxy:
  enabled: true
  # svcName: proxy-svc
  imageName: quay.io/travelaudience/docker-nexus-proxy
  imageTag: 2.6.0
  imagePullPolicy: IfNotPresent
  port: 8080
  targetPort: 8080
Change it to:
nexusProxy:
  enabled: true
  # svcName: proxy-svc
  imageName: quay.io/travelaudience/docker-nexus-proxy
  imageTag: 2.6.0
  imagePullPolicy: IfNotPresent
  port: 8080
  targetPort: 8081
Only the targetPort changes, from 8080 to 8081.
For testing purposes the following configuration can be used on GKE via port forwarding.
In order to use it in production (with an ingress), you should use real host names.
ingress:
  enabled: false
...
nexusProxy:
  ...
  env:
    nexusDockerHost: 127.0.0.1
    nexusHttpHost: 127.0.0.1
  ...
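With those values set, the UI can then be reached for testing through a port-forward to the proxy service (a sketch; the actual service name depends on your release and chart names):
# service name assumed for a release called "nexus"; check with: kubectl get svc
kubectl port-forward svc/nexus-sonatype-nexus 8080:8080
# then browse to http://127.0.0.1:8080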
Related
I am trying to setup connection to my databases which reside outside of GKE cluster from within the cluster.
I have read various tutorials including
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
and multiple SO questions, but the problem persists.
Here is an example configuration with which I am trying to setup kafka connectivity:
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
  - addresses:
      - ip: 10.132.0.5
    ports:
      - port: 9092
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ClusterIP
  ports:
    - port: 9092
      targetPort: 9092
I am able to get some sort of response by connecting directly via nc 10.132.0.5 9092 from the node VM itself, but if I create a pod, say by kubectl run -it --rm --restart=Never alpine --image=alpine sh then I am unable to connect from within the pod using nc kafka 9092. All libraries in my code fail by timing out so it seems to be some kind of routing issue.
Kafka is given as an example, I am having the same issues connecting to other databases as well.
Solved it; the issue was with my understanding of how GCP operates.
To solve the issue I had to add a firewall rule which allows all incoming traffic from the internal GKE network. In my case that was the 10.52.0.0/24 address range.
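For example, something along these lines (a sketch; the rule name, network and allowed port are assumptions to adapt to your own setup):
gcloud compute firewall-rules create allow-gke-pod-range \
  --network=default \
  --source-ranges=10.52.0.0/24 \
  --allow=tcp:9092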
Hope it helps someone.
I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without result. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I listed the system pods using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox:
  rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx":
  Error response from daemon: driver failed programming external connectivity on endpoint
  k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy:
  listen tcp 0.0.0.0:443: bind: address already in use
Then I check the processes using those ports and I kill them. That frees them up, and the ingress-controller pod gets deployed correctly.
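Roughly like this (a sketch of what I run on the host; the PID comes from the output of the first command):
sudo ss -tlnp | grep -E ':(80|443) '   # show what is listening on ports 80/443
sudo kill <PID>                        # PID reported in the output above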
What did I try in order to change the nginx-ingress-controller ports?
kubectl -n kube-system get deployment | grep nginx
> NAME READY UP-TO-DATE AVAILABLE AGE
> nginx-ingress-controller 0/1 1 0 9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added again.
Now my services are not reachable anymore through ingress.
Please note that minikube ships with addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and do one of two specific actions based on the value of the addonmanager.kubernetes.io/mode label of the managed resource:
addonmanager.kubernetes.io/mode=Reconcile
Will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged because addon-manager will bring them back to the original state.
addonmanager.kubernetes.io/mode=EnsureExists
Will be checked for existence only. Users can edit these addons as they want.
So to keep your customized version of the default Ingress service listening ports, first change the Ingress deployment template configuration to EnsureExists on the minikube VM.
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, thus as per design you might have to enable it in order to propagate the particular Ingress Controller's resources within the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component, placing them into the /etc/kubernetes/addons/ folder on the host machine, and then spins up each manifest file, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the K8s target resources (service, deployment, etc.) according to the template data.
Therefore, you can consider modifying the Ingress addon template data through the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
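A rough sketch of that workflow (the exact template file names differ between minikube versions, so treat the paths and patterns below as assumptions to verify on your VM):
minikube ssh
# inside the VM: stop addon-manager from reverting your edits
sudo sed -i 's|addonmanager.kubernetes.io/mode: Reconcile|addonmanager.kubernetes.io/mode: EnsureExists|' /etc/kubernetes/addons/ingress-*.yaml
# then adjust the hostPort values (e.g. 80/443 -> 8080/8443) in the deployment template
sudo vi /etc/kubernetes/addons/ingress-dp.yaml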
Well, I think you have to modify the Ingress which refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444
Going over my recent experiments, I went back to my notes to recreate a relatively simple Kubernetes setup with a back-end and a front-end service. In my scenario both of these services need to be exposed, and for now I'm doing that using NodePort.
This all worked quite nicely a week or so ago, but I think I managed to mess things up and this has me going nuts. The result is that I cannot seem to get access to my back-end pods via the service. I've followed along the Debug Service document (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service) and things are going haywire pretty quickly.
So this is my current yaml file:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: jan/test:v1.0.0
        ports:
        - containerPort: 8080
          protocol: TCP
The application starts fine - it reports in the log it is ready for requests. (It is a Java/Grizzly application). Now here is a list of what I tried.
check kubectl services: it is there (for this example it is 172.17.0.4)
exec into the pod (alpine)
ifconfig - 172.17.0.4, 127.0.0.1
nslookup test 10.96.0.10 - works
(note: without the nameserver this will return "can't resolve '(null)' : Name does not resolve")
ping 127.0.0.1 - works
wget http://127.0.0.1:8080 - responds fine
ping 172.17.0.4 - works
wget http://172.17.0.4:8080 - fails immediately, connection refused
wget -qO- test - fails after a while, operation times out
exec into another (busybox) pod
ifconfig - 172.17.0.8, 127.0.0.1
nslookup test - works
ping to pod 172.17.0.4 - works
wget http://172.17.0.8:8080 - fails immediately, connection refused
wget -qO- test - fails immediately, connection refused
Most importantly - I think that wget -qO- {service} needs to return a response from its pod, which currently it does not. Again - I went through the scenario of the Debug Service document and that completes without issues.
So what (else) could be wrong for that wget -qO- to fail?
So, let's see...You are in a busybox pod.
ifconfig - 172.17.0.8, 127.0.0.1
wget http://172.17.0.8:8080 - fails immediately, connection refused
What are you doing here? This is like doing localhost:8080. Of course you are getting connection refused. There is nothing serving on port 8080 of busybox.
wget -qO- test - fails immediately, connection refused
Same here. Now you are doing the request on port 80 of busybox, which again has nothing serving.
There is absolutely no way this configuration has ever worked. All you are doing is making requests to yourself from within busybox.
You need to do the request to a service that points to your app or directly to the pod that contains your app.
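In other words, from the busybox pod you would target either the service (port 80) or the app pod's own IP on its container port, using the names from your question:
wget -qO- http://test:80           # the service in front of the app
wget -qO- http://172.17.0.4:8080   # the app pod directly, on its containerPort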
I removed an important property that was fed into the application. So actually the problem was not at all at the level of K8S. Essentially I was rendering my deployed application 'invisible'.
I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses, although it shouldn't even be necessary since docs imply that the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.com
    secretName: tls-secret
  rules:
  - host: foo.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: foo-prod
          servicePort: 80
kubectl describe ing https-ingress output
Name:             https-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.56.0.3:8080)
TLS:
  tls-secret terminates foo.com
Rules:
  Host     Path  Backends
  ----     ----  --------
  foo.com
           /*    foo-prod:80 (<none>)
Annotations:
  force-ssl-redirect:  true
  secure-backends:     true
  ssl-redirect:        true
Events:  <none>
The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish in hopes of helping anyone else who uses that specific cloud and has that specific need, and finds generic guides to not quite fit the bill.
Get Cluster Credentials
This is a GCP specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8S-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found on GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary for the aforementioned RBAC factor.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see #JahongirRahmonov's example above. What you MUST keep in mind is that this step takes anywhere from half an hour to over an hour to set up - if you change the config and check again immediately, it won't be set up, but don't take that as implication that you messed something up! Wait for a while and see first.
It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.
GCP has a default ingress controller which at the time of this writing cannot force https.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
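For example, placed in the metadata of the https-ingress from your question (a minimal sketch; everything else stays as you already have it):
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"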
Hope it helps.
I've got a pod with 2 containers, both running nginx. One is running on port 80, the other on port 88. I have no trouble accessing the one on port 80, but can't seem to access the one on port 88. When I try, I get:
This site can’t be reached
The connection was reset.
ERR_CONNECTION_RESET
So here's the details.
1) The container is defined in the deployment YAML as:
- name: rss-reader
  image: nickchase/nginx-php-rss:v3
  ports:
  - containerPort: 88
2) I created the service with:
kubectl expose deployment rss-site --port=88 --target-port=88 --type=NodePort --name=backend
3) This created a service of:
root@kubeclient:/home/ubuntu# kubectl describe service backend
Name: backend
Namespace: default
Labels: app=web
Selector: app=web
Type: NodePort
IP: 11.1.250.209
Port: <unset> 88/TCP
NodePort: <unset> 31754/TCP
Endpoints: 10.200.41.2:88,10.200.9.2:88
Session Affinity: None
No events.
And when I tried to access it, I used the URL
http://[nodeip]:31754/index.php
Now, when I instantiate the container manually with Docker, this works.
So anybody have a clue what I'm missing here?
Thanks in advance...
My presumption is that you're using the wrong access IP. Are you trying to access the minion's IP address and port 31754?
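If not, a quick way to find a node IP to test against (a sketch, reusing the NodePort from your service description above):
kubectl get nodes -o wide              # note a node's internal/external IP
curl http://<node-ip>:31754/index.php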