replicationcontroller not shown in kubernetes - nginx

I've just installed Kubernetes on Vagrant (one master and one node).
I deployed 3 nginx pods:
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
They were all running fine. I decided to delete one pod, and a new pod was recreated immediately (which is what I expected), so it works fine. But the problem is that I can't see the replicationcontroller:
$ ./cluster/kubectl.sh get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-2494149703-b18av   1/1     Running   0          22h
my-nginx-2494149703-l40qy   1/1     Running   0          22h
my-nginx-2494149703-tcw5v   1/1     Running   0          32m
but for the rc, nothing was shown:
$ ./cluster/kubectl.sh get rc
$

You might be running nginx in a different namespace. Try ./cluster/kubectl.sh get rc --all-namespaces.
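If it isn't a namespace issue: on newer clusters, kubectl run creates a Deployment backed by a ReplicaSet rather than a ReplicationController, and the hashed pod names like my-nginx-2494149703-b18av hint at exactly that. A quick sketch of both checks:
$ ./cluster/kubectl.sh get rc --all-namespaces
$ ./cluster/kubectl.sh get deployments,rs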

Related

Apache Pulsar Admin UI Login Issue - M1 Mac - Docker-Desktop

I am having an issue logging into the Pulsar Manager UI running on my k8s cluster in docker-desktop on my M1 Mac.
When I try to log in, I am unable to progress past the login page with the default pulsar admin credentials, and when I inspect the page I see the following:
Failed to load resource: the server responded with a status of 404 (Not Found)
I think the issue has something to do with me being unable to connect to the backend service over port 7750, but I am honestly not sure how I can resolve that. I have deployed using the Helm chart and used the minikube.yaml values file to keep replica counts and such down, since it's running on the single docker-desktop node.
Has anyone encountered this issue before or know of a solution?
If this issue has already come up here, I would love a link to that thread!
Below I have included some details of what's running in my cluster, the other values are all the same as what's included in the helm chart.
Services:
k get svc -n pulsar
NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
pulsar-mini-bookie           ClusterIP      None             <none>        3181/TCP,8000/TCP                     21h
pulsar-mini-broker           ClusterIP      None             <none>        8080/TCP,6650/TCP                     21h
pulsar-mini-proxy            LoadBalancer   10.102.192.239   localhost     80:30132/TCP,6650:30925/TCP           21h
pulsar-mini-pulsar-manager   LoadBalancer   10.98.70.14      localhost     9527:30322/TCP                        21h
pulsar-mini-toolset          ClusterIP      None             <none>        <none>                                21h
pulsar-mini-zookeeper        ClusterIP      None             <none>        8000/TCP,2888/TCP,3888/TCP,2181/TCP   21h
Output of the csrf-token command, showing that the connection to 7750 is refused, even if I try kubectl port-forward on the pulsar-mini-pulsar-manager pod (though perhaps this isn't the correct way to do it):
% CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
curl \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
-H 'Content-Type: application/json' \
-X PUT http://localhost:7750/pulsar-manager/users/superuser \
-d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username#test.org"}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to localhost port 7750: Connection refused
curl: (7) Failed to connect to localhost port 7750: Connection refused
When I run the bin/pulsar-admin commands from my local machine, things work just fine; it's only the Pulsar Manager backend and its UI that I can't reach for some reason.
Output of some commands below:
$ bin/pulsar-admin topics list-partitioned-topics apache/pulsar
"persistent://apache/pulsar/test-topic"
$ bin/pulsar-admin namespaces list apache
"apache/pulsar"
"apache/tester"
$ bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic-2 -p 4
$ bin/pulsar-admin topics list-partitioned-topics apache/pulsar
"persistent://apache/pulsar/test-topic"
"persistent://apache/pulsar/test-topic-2"
Is that the correct port? What is in your PM configuration? Anything in the logs?
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
9527:30322/TCP
https://pulsar.apache.org/docs/en/administration-pulsar-manager/
In Docker we have to specify the second port as well:
docker run -it \
  -p 9527:9527 -p 7750:7750 \
  -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
  apachepulsar/pulsar-manager:v0.2.0
You only have the 9527 port specified.
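The rough Kubernetes equivalent is to forward the backend port directly. A sketch, assuming the manager deployment shares its service's name, pulsar-mini-pulsar-manager, and that the container really listens on 7750 (the chart's service only exposes 9527):
# forward the pulsar-manager backend port to localhost
$ kubectl port-forward -n pulsar deploy/pulsar-mini-pulsar-manager 7750:7750
# in a second terminal, the csrf-token call should now be answered
$ curl http://localhost:7750/pulsar-manager/csrf-token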

Kubernetes nginx ingress redirect domains to cluster

I want to redirect two Namecheap domains, testA.com and testB.com, to two different services (websites) on my Raspberry Pi cluster.
I set everything up using an updated form of this guide. This means that k3s, MetalLB, the NGINX ingress and cert-manager are fully deployed and working.
% kubectl get pods --all-namespaces
NAMESPACE      NAME                                                      READY   STATUS    RESTARTS   AGE
kube-system    metallb-speaker-bsxfg                                     1/1     Running   1          30h
kube-system    metallb-speaker-6pwsb                                     1/1     Running   1          30h
kube-system    nginx-ingress-ingress-nginx-controller-7cc994599f-db285   1/1     Running   1          28h
cert-manager   cert-manager-7998c69865-754mr                             1/1     Running   2          27h
kube-system    metallb-speaker-z8p97                                     1/1     Running   1          30h
webserver      httpd-554794f9fd-npd4g                                    1/1     Running   1          21h
kube-system    metallb-controller-df647b67b-2khlr                        1/1     Running   1          30h
kube-system    coredns-854c77959c-dl74f                                  1/1     Running   2          33h
cert-manager   cert-manager-webhook-7d6d4c78bc-97g2g                     1/1     Running   1          27h
kube-system    metrics-server-86cbb8457f-2vqmt                           1/1     Running   3          33h
cert-manager   cert-manager-cainjector-7b744d56fb-bvwjd                  1/1     Running   2          27h
kube-system    local-path-provisioner-5ff76fc89d-vbqs9                   1/1     Running   4          33h
% kubectl get services -n kube-system -o wide
NAME                                                TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE   SELECTOR
nginx-ingress-ingress-nginx-controller-admission    ClusterIP      10.43.116.250   <none>            443/TCP                      28h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
nginx-ingress-ingress-nginx-controller              LoadBalancer   10.43.10.136    192.168.178.240   80:31517/TCP,443:31733/TCP   28h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
The guide describes this for dynDNS. How should I do it with two domains and two different websites? Is this done with a containerised certbot, or do I need a CNAME?
You can run the command above and check the LoadBalancer IP, which is the external IP exposed to the internet.
Add this IP on the DNS side as an A record (or as a CNAME record) and you are done. Both domains will then send their traffic to the Kubernetes cluster, and inside Kubernetes you can create ingress rules to route the traffic to a specific service.
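A sketch of the per-host routing, with hypothetical Service names site-a and site-b standing in for the two websites (networking.k8s.io/v1 needs Kubernetes 1.19+; on older clusters use networking.k8s.io/v1beta1):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-domains
spec:
  ingressClassName: nginx
  rules:
  - host: testA.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site-a    # hypothetical Service for the first website
            port:
              number: 80
  - host: testB.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site-b    # hypothetical Service for the second website
            port:
              number: 80
EOF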

When minikube on Mac is asked for URL, why does it instead start a service in a tunnel?

I installed the latest Docker, Minikube, and kubectl on my Mac (Catalina). I also have a recent MySQL, with the command line properly installed in the PATH. I'm using the stock terminal (zsh).
Docker started just fine and tells me about the pods it has installed.
Minikube starts fine, and kubectl get all reports on its artifacts just fine.
Jeromes-MacBook-Pro:cloudnative-statelessness jerome$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-7dbfd4dbc4-sz8ps   1/1     Running   0          15m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20m
service/mysql-svc    NodePort    10.111.176.15   <none>        3306:30022/TCP   15m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           15m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-7dbfd4dbc4   1         1         1       15m
When I run minikube service mysql-svc --url I'm expecting to get a URL, like this one from another machine: http://192.168.99.101:31067. Instead I see something about starting a service in a 'tunnel':
Jeromes-MacBook-Pro:cloudnative-statelessness jerome$ minikube service mysql-svc --url
🏃 Starting tunnel for service mysql-svc.
|-----------|-----------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------|-------------|------------------------|
| default | mysql-svc | | http://127.0.0.1:64966 |
|-----------|-----------|-------------|------------------------|
http://127.0.0.1:64966
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
At this point the terminal is non-responsive.
I believe that minikube service SERVICENAME should try to start a service and also print that block of text. I also believe that the --url suffix should merely return what is in the URL column, and skip starting a service.
Any good explanations of how I can get the result I want on my Mac?
And BTW, how do I recover control of the terminal session once it states "Because..."?
Thanks,
Jerome.
UPDATE ON 8/14/2020:
I took Saravanan's advice. I uninstalled Docker from my Mac and used homebrew to install docker + docker-machine + virtualbox (see https://www.robinwieruch.de/docker-macos). When I run "minikube service mysql-svc --url" I no longer get the tunnel problem. Thank you, Saravanan.
My problems have morphed into getting a correct version of my containers (compiled apps, then run through docker build) from Docker Hub. The YAML file I have points at my account there, but I'm afraid I have an obsolete version. What do I do to overwrite my current version on my Mac, or to delete the Docker containers so that kubectl create can pull the updated version?
The reason for this is that your minikube cluster is running inside a Docker container.
Try changing the configuration to run it in VirtualBox instead. Then you can reach your SQL pod without tunneling.
# first delete the existing minikube image
$ minikube delete
# change the minikube driver to virtualbox
$ minikube config set vm-driver virtualbox
# start minikube again
$ minikube start
Ensure you have VirtualBox installed before proceeding.
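With the VirtualBox driver, the command should print the NodePort URL and return your prompt instead of holding a tunnel open. Something like this, where the IP is illustrative but the port is the 30022 NodePort from your own service output:
$ minikube service mysql-svc --url
http://192.168.99.100:30022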

How to check ingress controller version on minikube kubernetes cluster

The documentation says that I need to enter the pod, but I can't.
sudo kubectl get pods -n kube-system gives me the following output:
coredns-66bff467f8-bhwrx                    1/1   Running     4   10h
coredns-66bff467f8-ph2pb                    1/1   Running     4   10h
etcd-ubuntu-xenial                          1/1   Running     3   10h
ingress-nginx-admission-create-mww2h        0/1   Completed   0   4h48m
ingress-nginx-admission-patch-9dklm         0/1   Completed   0   4h48m
ingress-nginx-controller-7bb4c67d67-8nqcw   1/1   Running     1   4h48m
kube-apiserver-ubuntu-xenial                1/1   Running     3   10h
kube-controller-manager-ubuntu-xenial       1/1   Running     3   10h
kube-proxy-hn9qw                            1/1   Running     3   10h
kube-scheduler-ubuntu-xenial                1/1   Running     3   10h
storage-provisioner                         1/1   Running     4   10h
When I try to run sudo kubectl exec ingress-nginx-controller-7bb4c67d67-8nqcw -- /bin/bash/ I receive the following error:
Error from server (NotFound): pods "ingress-nginx-controller-7bb4c67d67-8nqcw" not found
The reason why I'm running everything with sudo is that I'm using vm-driver=none.
The reason why I need to know the ingress controller version is that I want to use a wildcard in the host name to forward multiple subdomains to the same service/port, and I know that this feature is only available from ingress controller version 1.18.
You get that error because you are not passing the namespace parameter (-n kube-system).
And to get the version, you would do this:
kubectl get po ingress-nginx-controller-7bb4c67d67-8nqcw -n kube-system -oyaml | grep -i image:
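Alternatively, the exec from the question works once the namespace flag is added, and the controller binary can report its own version (this is the method the ingress-nginx docs describe):
$ sudo kubectl exec -it ingress-nginx-controller-7bb4c67d67-8nqcw -n kube-system -- /nginx-ingress-controller --version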

error: unable to upgrade connection: container not found ("wordpress")

My goal is to list the environment variables in my wordpress pod:
kubectl get pods
wordpress-77f45f895-lxh5t          1/1   Running   993   92d
wordpress-mysql-7d4fc77fdc-x4bfm   1/1   Running   87    92d
Although the pod is running
kubectl exec wordpress-77f45f895-lxh5t env
error: unable to upgrade connection: container not found ("wordpress")
If I try the other one
kubectl exec wordpress-mysql-7d4fc77fdc-x4bfm env
Unable to connect to the server: net/http: TLS handshake timeout
My services
wordpress         NodePort    10.102.29.45   <none>   80:31262/TCP   94d
wordpress-mysql   ClusterIP   None           <none>   3306/TCP       94d
Why is the container not found?
Looking at your output, I think your containers are crashing: the first pod has restarted 993 times and the second one 87 times. You can check the logs of the containers and the events of the pods:
kubectl logs <podname> for pod logs
kubectl describe pod <podname> for a detailed description.
As suggested by @mdaniel in the comments, check the ports as well.
Are you able to access the application on the NodePort?
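A concrete sketch against the pod from the question; --previous shows the log of the last crashed container instance:
$ kubectl logs wordpress-77f45f895-lxh5t --previous
$ kubectl describe pod wordpress-77f45f895-lxh5t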
The problem may be that the container hasn't been started yet. Use
kubectl describe pod <podname>
to see whether you can find messages such as:
Normal  Created  19s  kubelet  Created container container
Normal  Started  17s  kubelet  Started container container
Once the container has been created and started, you should no longer see that error message.
