run kubernetes containers without minikube (or similar) - nginx

I just want to run an nginx server on Kubernetes with the help of
kubectl run nginx-image --image nginx
but this error was thrown:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
I then ran
kubectl run nginx-image --kubeconfig ~/.kube/config --image nginx
and the same error was thrown again:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
minikube start solves the problem, but it takes up resources...
I just want to ask: how can I run kubectl without minikube (or another such solution) being started? Please tell me if it is not possible.
Also, when I run kubectl get pods, I get two pods, but I just want one; I know it is possible since I have seen it in some video tutorials.
Please help...

kubectl is a command-line tool that is responsible for communicating with the cluster (in your case, Minikube). It allows you to run commands against Minikube: you can use it to deploy applications, inspect and manage resources, and view logs. When you execute this command
kubectl run nginx-image --image nginx
kubectl tries to connect to minikube and sends your request (run Nginx) to it. So if you stop minikube, kubectl has nothing to communicate with. In short, minikube is responsible for running Nginx, and kubectl is only responsible for telling minikube to run it.
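For a quick sanity check, you can ask kubectl which cluster it is pointed at; if that cluster isn't running, the second command fails with a connection error very much like the one above:
kubectl config current-context   # which cluster/context kubectl would talk to
kubectl cluster-info             # fails with a connection error if that cluster isn't up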

I mean, you need to install (and run) a Kubernetes cluster in order to use it. It's not magic. If minikube isn't to your liking there are many installers; try Docker Desktop or k3d.
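For example, with k3d a minimal sketch looks like this (the cluster name dev is arbitrary, not something required):
k3d cluster create dev                 # creates a small k3s-in-Docker cluster and points kubectl at it
kubectl run nginx-image --image nginx  # the original command now has a cluster to talk to
kubectl get pods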

Related

Nginx Ingress controller - Error when getting IngressClass nginx

I have a Kubernetes cluster v1.22.1 set up on bare-metal CentOS. I am facing a problem when setting up the Nginx Ingress controller following this link.
I followed steps 1-3 exactly but got a CrashLoopBackOff error in the nginx ingress controller pod. I checked the pod's logs and found the following:
[root@dev1 deployments]# kubectl logs -n nginx-ingress nginx-ingress-5cd5c7549d-hw6l7
I0910 23:15:20.729196 1 main.go:271] Starting NGINX Ingress controller Version=1.12.1 GitCommit=6f72db6030daa9afd567fd7faf9d5fffac9c7c8f Date=2021-09-08T13:39:53Z PlusFlag=false
W0910 23:15:20.770569 1 main.go:310] The '-use-ingress-class-only' flag will be deprecated and has no effect on versions of kubernetes >= 1.18.0. Processing ONLY resources that have the 'ingressClassName' field in Ingress equal to the class.
F0910 23:15:20.774788 1 main.go:314] Error when getting IngressClass nginx: the server could not find the requested resource
I believe I have the IngressClass set up properly, as shown below:
[root@dev1 deployments]# kubectl get IngressClass
NAME    CONTROLLER                     PARAMETERS   AGE
nginx   nginx.org/ingress-controller   <none>       2m12s
So I have no idea why it says Error when getting IngressClass nginx. Can anyone shed some light on this, please?
Reproduction and what happens
I created a one-node cluster using kubeadm on CentOS 7 and got the same error.
Both you and I were able to proceed that far only because we missed this command at the beginning:
git checkout v1.12.1
The main difference is that ingress-class.yaml uses networking.k8s.io/v1beta1 in v1.12.1 and networking.k8s.io/v1 in the master branch.
After I went through the steps a second time, this time with the branch switched, I immediately saw this error:
$ kubectl apply -f common/ingress-class.yaml
error: unable to recognize "common/ingress-class.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1beta1"
It looks like the other resources have not been updated for use on Kubernetes v1.22+ yet.
Please see the deprecated API migration guide - v1.22 - Ingress.
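For reference, here is a minimal sketch of the networking.k8s.io/v1 form of that IngressClass (the controller name is taken from the kubectl get IngressClass output above); the v1.12.1 manifest differs essentially only in the apiVersion, which is why it can't be applied to a v1.22 cluster:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller
EOF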
How to proceed further
I tested exactly the same approach on a cluster with v1.21.4 and it worked like a charm. So you may consider downgrading the cluster.
If you're not tied to the NGINX Ingress Controller (supported by NGINX Inc.), you can try ingress-nginx, which is developed by the Kubernetes community. I tested it on v1.22 and it works fine. Please see
Installation on bare metal cluster.
P.S. It may be confusing, but there are two free nginx ingress controllers, developed by different teams. There is also a third option - NGINX Plus, which is paid and has more options. Please see the differences here.

Ignore https gcloud composer airflow

I am running a gcloud composer command from behind a proxy; how do I set it to ignore the HTTPS certificate check?
gcloud composer environments run $project --location $location list_dags
Unable to connect to the server: x509: cannot validate certificate for X.X.X.X because it doesn't contain any IP SANs
Up to current versions of Cloud Composer (1.9.0 at the time of writing), the gcloud composer environments run command works by connecting to the Kubernetes master of your environment's GKE cluster. The error means that something is potentially wrong with the configuration given to you by the GKE API, or that something is intercepting your traffic (like a non-transparent proxy on a corporate network). You should verify that you can connect to the Kubernetes master using kubectl and resolve any issues with that before trying to use the Composer command.
To connect using kubectl, obtain cluster credentials and then try issuing a few commands:
gcloud container clusters get-credentials --zone=$Z $COMPOSER_CLUSTER_NAME
kubectl get pods
To answer your question directly: the --insecure-skip-tls-verify flag can fix your issue if you use kubectl, but this option cannot be passed to gcloud composer.
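A minimal sketch of what that looks like with kubectl (the flag is a global option, so it works with any subcommand, but it disables certificate validation entirely):
kubectl get pods --insecure-skip-tls-verify=true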

Configure Docker to use a proxy server

I have installed Docker on Windows. When I try to run hello-world for testing, I get the following error:
Unable to find image
My computer uses a proxy server for communication, and I need to configure that server in Docker. I know the proxy server's address and port. Where do I need to update this setting? I tried using https://docs.docker.com/network/proxy/#set-the-environment-variables-manually.
It is not working.
Try setting the proxy. Right-click the Docker icon in the system tray, go to Settings -> Proxies, and add the setting below:
"HTTPS_PROXY=http://<username>:<password>#<host>:<port>"
If you are looking to set a proxy on Linux, see here
The answer by Alexandre Mélard to the question Cannot download Docker images behind a proxy works; here is the simplified version:
Find the systemd or init.d script path of the docker service by running service docker status or systemctl status docker; for example, in Ubuntu 16.04 it's at /lib/systemd/system/docker.service
Edit the script, for example with sudo vim /lib/systemd/system/docker.service, adding the following in the [Service] section:
Environment="HTTP_PROXY=http://<proxy_host>:<port>"
Environment="HTTPS_PROXY=http://<proxy_host>:<port>"
Environment="NO_PROXY=<no_proxy_host_or_ip>,<e.g.:172.10.10.10>"
Reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker or sudo service docker restart
Verify: docker info | grep -i proxy should show something like:
HTTP Proxy: http://10.10.10.10:3128
HTTPS Proxy: http://10.10.10.10:3128
This adds the proxy for docker pull, which is what the question is about. If a proxy is needed for running or building containers, either configure ~/.docker/config.json as the official docs explain, or change the Dockerfile so the proxy is set inside the container.
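A minimal sketch of the ~/.docker/config.json route (the proxy address and exclusions are placeholders; if you already have a config.json with registry logins, merge by hand instead of overwriting it):
# note: this overwrites any existing ~/.docker/config.json
cat > ~/.docker/config.json <<EOF
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.10.10.10:3128",
      "httpsProxy": "http://10.10.10.10:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF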
I had the same problem on a Windows server and solved it by setting the environment variable HTTP_PROXY in PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
And then restarting docker:
Restart-Service docker
More information at Microsoft official proxy-configuration guide.
Note: With version 19.03.5, the error returned when pulling the image was connection refused.

helm not working on rancher with kubernetes

We followed the quick start guide for Rancher with the Kubernetes environment, and followed all the steps and exercises from this ebook.
Everything was beautiful, with one exception: the helm chart manager is not working.
We found this issue, where a lot of people were talking about nginx configurations that apparently solved it, but it did not work for us.
When we run helm like:
> helm install --name prom-release stable/prometheus
It returns:
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 35.227.80.81:10250: getsockopt: connection timed out
We appreciate the help!
http://rancher.com/docs/rancher/v1.6/en/kubernetes/addons/#helm
Using helm in the Rancher UI
Rancher provides shell access directly to a managed kubectl instance that can be used to manage Kubernetes clusters and applications. To start using this shell, navigate to Kubernetes -> CLI. This shell is automatically installed with a Helm client and commands for Helm can be used immediately.
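The error above is a timeout dialing the node's kubelet port (10250), so a quick check, sketched here on the assumption that the node IP from the error is the right one, is to test whether that port is reachable from wherever helm/tiller runs; if it times out, a firewall rule between the hosts is the usual suspect:
nc -vz 35.227.80.81 10250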

how to access local kubernetes minikube dashboard remotely

Kubernetes newbie (or rather basic networking) question:
Installed single-node minikube (0.23 release) on an Ubuntu box running in my LAN (on IP address 192.168.0.20) with VirtualBox.
The minikube start command completes successfully as well:
minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
minikube dashboard also comes up successfully (running on 192.168.99.100:30000).
What I want to do is access the minikube dashboard from my MacBook (192.168.0.11) on the same LAN.
Also I want to access the same minikube dashboard from the internet.
For LAN Access:
Now, from what I understand, since I am using VirtualBox (the default VM option), I can change the networking type (to NAT with port forwarding) using the VBoxManage command
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,,22"
as listed here
In my case it will be something like this
VBoxManage modifyvm "VM name" --natpf1 "guesthttp,http,,30000,,8080"
Am I thinking along the right lines here?
Also, for accessing the same minikube dashboard remotely, I can set up a no-ip.com-like service. They ask you to install their utility on the Linux box and also to set up port forwarding in the router settings, which forwards from a host port to a guest port. Is that about right? Am I missing something here?
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
@Jeff provided the perfect answer; here are a few more hints for newbies.
Start a proxy using @Jeff's command; by default it will open a proxy on '0.0.0.0:8001'.
kubectl proxy --address='0.0.0.0' --disable-filter=true
Visit the dashboard via the link below:
curl http://your_api_server_ip:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
For more details, please refer to the official docs.
I reached this url with search keywords: minikube dashboard remote.
In my case, minikube (and its dashboard) were running remotely and I wanted to access it securely from my laptop.
[my laptop] --ssh--> [remote server with minikube]
Following gmiretti's answer, my solution was a local-forwarding SSH tunnel:
On the remote minikube server, run these:
minikube dashboard
kubectl proxy
And on my laptop, run this (keep localhost as is):
ssh -L 12345:localhost:8001 myLogin@myRemoteServer
The dashboard was then available at this url on my laptop:
http://localhost:12345/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
The ssh way
Assuming that you have ssh on your ubuntu box.
First run kubectl proxy & to expose the dashboard on http://localhost:8001
Then expose the dashboard using ssh's port forwarding, executing:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Now you should be able to access the dashboard from your MacBook in your LAN by pointing the browser to http://192.168.0.20:30000
To expose it from outside, just expose port 30000 using no-ip.com; maybe change it to some standard port, like 80.
Note that this isn't the simplest solution, but in some places it would work without superuser rights ;) You can automate the login after restarts of the Ubuntu box using an init script and setting up a public key for the connection.
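A minimal sketch of that key setup (the key file name is just a placeholder):
ssh-keygen -t rsa -f ~/.ssh/dashboard_tunnel -N ""
ssh-copy-id -i ~/.ssh/dashboard_tunnel.pub $USER@192.168.0.20
ssh -i ~/.ssh/dashboard_tunnel -N -R 30000:127.0.0.1:8001 $USER@192.168.0.20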
I had the same problem recently and solved it as follows:
Get your minikube VM onto the LAN by adding another network adapter in bridge network mode. For me, this was done through modifying the minikube VM in the VirtualBox UI and required VM stop/start. Not sure how this would work if you're using hyperkit. Don't muck with the default network adapters configured by minikube: minikube depends on these. https://github.com/kubernetes/minikube/issues/1471
If you haven't already, install kubectl on your mac: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Add a cluster and associated config to the ~/.kube/config as below, modifying the server IP address to match your newly exposed VM IP. Names can also be modified if desired. Note that the insecure-skip-tls-verify: true is needed because the https certificate generated by minikube is only valid for the internal IP addresses of the VM.
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.0.101:8443
  name: mykubevm
contexts:
- context:
    cluster: mykubevm
    user: kubeuser
  name: mykubevm
users:
- name: kubeuser
  user:
    client-certificate: /Users/myname/.minikube/client.crt
    client-key: /Users/myname/.minikube/client.key
Copy the ~/.minikube/client.* files referenced in the config from your linux minikube host. These are the security key files required for access.
Switch your kubectl context: kubectl config use-context mykubevm. At this point, your minikube cluster should be accessible (try kubectl cluster-info).
Run kubectl proxy --port=8000 to create a local proxy at http://localhost:8000 for access to the dashboard. Navigate to that address in your browser.
It's also possible to ssh to the minikube VM. Copy the ssh key pair from ~/.minikube/machines/minikube/id_rsa* to your .ssh directory (renaming to avoid blowing away other keys, e.g. mykubevm & mykubevm.pub). Then ssh -i ~/.ssh/mykubevm docker@<kubevm-IP>
Thanks for the valuable answers. If you don't want to keep the kubectl proxy command running just to view the dashboard, the "Service" object in the YAML file below makes it viewable remotely until you delete it. Create a new YAML file minikube-dashboard.yaml and write the code manually; I don't recommend copying and pasting it.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-test
  namespace: kube-system
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    app: kubernetes-dashboard
  type: NodePort
Execute the command:
$ sudo kubectl apply -f minikube-dashboard.yaml
Finally, open the URL:
http://your-public-ip-address:30000/#!/persistentvolume?namespace=default
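To verify, a quick sketch using the names from the YAML above (which address works from outside depends on how the minikube VM is exposed):
kubectl get svc -n kube-system kubernetes-dashboard-test   # should show TYPE NodePort and PORT(S) 80:30000/TCP
minikube ip                                                # the VM address to use from the LAN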
Slight variation on the approach above.
I have an http web service with NodePort 30003. I make it available on port 80 externally by running:
sudo ssh -v -i ~/.ssh/id_rsa -N -L 0.0.0.0:80:localhost:30003 ${USER}@$(hostname)
Jeff Prouty added a useful answer:
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
But for me it didn't work initially.
I ran this command on the CentOS 7 machine running kubectl (local IP: 192.168.0.20).
When I tried to access the dashboard from another computer (which was in the LAN, obviously):
http://192.168.0.20:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
all I got in my web browser was a timeout.
The solution for my case is that in CentOS 7 (and probably other distros) you need to open port 8001 in your OS firewall.
So in my case I needed to run, in a CentOS 7 terminal:
sudo firewall-cmd --zone=public --add-port=8001/tcp --permanent
sudo firewall-cmd --reload
And after that, it works! :)
Of course you need to be aware that this is not a safe solution, because anybody has access to your dashboard now. But I think that for local lab testing it will be sufficient.
In other Linux distros, the command for opening ports in the firewall may be different. Please use Google for that.
Wanted to link this answer by iamnat.
https://stackoverflow.com/a/40773822
Use minikube ip to get your minikube ip on the host machine
Create the NodePort service
You should be able to access the configured NodePort via <minikube-ip>:<nodeport>
This should work on the LAN as well as long as firewalls are open, if I'm not mistaken.
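A minimal sketch of those steps (the deployment name, namespace, and ports below are assumptions about a typical older minikube dashboard install, not taken from the linked answer):
minikube ip                                          # e.g. 192.168.99.100
kubectl -n kube-system expose deployment kubernetes-dashboard \
  --type=NodePort --name=dashboard-nodeport --port=80 --target-port=9090
kubectl -n kube-system get svc dashboard-nodeport    # note the assigned NodePort, then browse to <minikube-ip>:<nodeport>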
Just for my own learning purposes, I solved this issue using nginx proxy_pass. For example, say the dashboard has been bound to a port, let's say 43587. My local URL to the dashboard was then
http://127.0.0.1:43587/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Then I installed nginx and opened the out-of-the-box config
sudo nano /etc/nginx/sites-available/default
and edited the location directive to look like this:
location / {
    proxy_set_header Host "localhost";
    proxy_pass http://127.0.0.1:43587;
}
then I did
sudo service nginx restart
then the dashboard was available from outside at:
http://my_server_ip/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/cronjob?namespace=default
