Issues with persistent volume on DigitalOcean Kubernetes cluster - wordpress

Just created a managed 2-node Kubernetes (ver. 1.22.8) cluster on DigitalOcean (DO).
After installing WordPress using the Bitnami Helm chart and then installing a WP plugin, the site became unreachable.
Looking at the DO K8s dashboard, in the Deployments section, the wordpress deployment shows the following errors:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
AttachVolume.Attach failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
MountVolume.MountDevice failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50' output: "mke2fs 1.45.5 (07-Jan-2020)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50 does not exist and no size was specified.\n"
Readiness probe failed: HTTP probe failed with statuscode: 404
As I'm quite new to K8s, I don't really know how to troubleshoot this.
Any help would be much appreciated.
UPDATE
I was able to reproduce the error and found what triggers it.
The Bitnami WordPress chart installs several WP plugins by default. As soon as I try to delete them, the error shows up and the persistent volume gets corrupted...
Is this a bug, or is it standard behavior?
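For reference, here is how I've been inspecting the claim and pod events so far (a sketch; the names in angle brackets are placeholders for the actual release and namespace):
kubectl get pvc -n <namespace>
kubectl describe pvc <wordpress-pvc-name> -n <namespace>
kubectl describe pod <wordpress-pod-name> -n <namespace>
kubectl get events -n <namespace> --sort-by=.lastTimestamp
The describe output and events usually repeat the attach/format failures seen in the dashboard and sometimes add more detail from the CSI driver.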

Related

Airflow (LocalExecutor) - Docker :: Job is failing with Log file does not exist

Airflow version: 1.10.9
Executor: LocalExecutor
Docker setup
When a job runs, we sometimes get the following error. I have searched the web; many people faced this issue with the CeleryExecutor, but we are using the LocalExecutor (Docker setup). How can I resolve this problem?
*** Log file does not exist: /home/ubuntu/airflow/airflow/logs/es_update_relevance_score/es_update_relevance_score/2020-05-14T16:26:06.062416+00:00/1.log
*** Fetching from: http://:8793/log/es_update_relevance_score/es_update_relevance_score/2020-05-14T16:26:06.062416+00:00/1.log
*** Failed to fetch log file from worker. Invalid URL 'http://:8793/log/es_update_relevance_score/es_update_relevance_score/2020-05-14T16:26:06.062416+00:00/1.log': No host supplied
Here is one approach I've seen when running the scheduler and webserver in their own containers and using LocalExecutor:
Mount a host log directory as a volume into both the scheduler and webserver containers:
volumes:
- /location/on/host/airflow/logs:/opt/airflow/logs
Make sure the user within the airflow containers (usually airflow) has permission to read and write that directory. If the permissions are wrong, you will see an error like the one in your post.
This probably won't scale beyond LocalExecutor usage though.
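As a minimal sketch of that layout (the service names, image tag, and host path below are illustrative, not taken from your setup), both containers mount the same host directory at Airflow's log path:
# docker-compose.yml (sketch)
version: "3"
services:
  webserver:
    image: apache/airflow:1.10.9   # illustrative; use whatever image you already run
    command: webserver
    volumes:
      - /location/on/host/airflow/logs:/opt/airflow/logs
  scheduler:
    image: apache/airflow:1.10.9
    command: scheduler
    volumes:
      - /location/on/host/airflow/logs:/opt/airflow/logs
After that, chown the host directory to whatever UID the airflow user runs as inside the container so both services can write their task logs there.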

bootstrap cloudify 3.4 error occurred

I installed Cloudify 3.4 according to the Cloudify docs. When I installed the manager, I executed:
# cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i openstack-manager-blueprint-inputs.yaml
an error occurred:
[ERROR] Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> Timed out trying to connect to 192.168.17.15 (tried 5 times)
I have already built an external network 192.168.17.0/24 and I have already installed these plugins:
cloudify_docker_plugin-1.3.2-py27-none-linux_x86_64-Ubuntu-trusty.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-redhat-Maipo.wgn
cloudify_host_pool_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn
cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-redhat-Maipo.wgn
So, how can I solve this error? Thank you to everyone who helps!
It seems that you can't connect to the manager.
Please make sure that you have an SSH connection from the CLI machine to the manager.
Since you are bootstrapping an OpenStack manager, make sure you have an external IP if you are outside of OpenStack, or that the CLI machine is on the same network as the manager if you are on OpenStack.
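A quick connectivity check from the CLI machine (a sketch; the key path and user name depend on your manager blueprint inputs) would be:
ping -c 3 192.168.17.15
ssh -i /path/to/manager_key.pem <manager-user>@192.168.17.15
If the SSH step hangs or times out, the bootstrap will keep failing with the same "Timed out trying to connect" error, and the problem is networking or security groups rather than Cloudify itself.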

OpenStack - Web console connection refused

Just getting started with OpenStack.
I got everything set up on an Ubuntu VM (under Parallels).
When I attempt to log into the browser console as admin (the password was set during the DevStack install), I get:
HTTPConnectionPool(host='10.211.55.16', port=8774): Max retries exceeded with url: /v2/a586870bde4c4dfc993dc40cab8047b7/extensions (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
I am, however, able to run CLI commands such as keystone tenant-list, and all others, on the actual server.
I made sure that I'm able to ping the virtual Ubuntu host from my Mac. When I first enter http://myhost.mydomain I do get a login page, but as soon as I enter admin's credentials, I get this ugly (and super long) error.
What things could I check to fix this?
Resolution:
1) Wiped my Ubuntu host clean
2) Followed the step-by-step instructions here: http://www.stackgeek.com/guides/gettingstarted.html
Everything now works without a glitch.
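For anyone who hits the same error and wants to debug before reinstalling: the traceback points at the nova API (port 8774) refusing connections, so a first check (a sketch; the host IP comes from the error above) is whether nova-api is actually listening:
curl -i http://10.211.55.16:8774/
sudo netstat -tlnp | grep 8774
If nothing is listening on 8774, the nova-api service simply isn't running, and the dashboard will fail exactly like this as soon as it tries to list extensions.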

Bad Request Error OpenStack

I am trying to create an instance from the command line using the command:
nova boot --config-drive=true --flavor 2 --key-name key1 --image c28bc1e8-a25f-413c-9e13-fecdd5d6f522 instance1
But I got this error:
ERROR (BadRequest): Network 00000000-0000-0000-0000-000000000000,
11111111-1111-1111-1111-111111111111 could not be found. (HTTP 400)
(Request-ID: req-6dd0352e-008a-40c4-91e2-454529712ba9)
Please guide me on how to resolve this problem.
I'm guessing you may have the rax_default_network_flags_python_novaclient_ext Python package installed, which automatically adds those networks to the request, but you are not booting an instance in the Rackspace public cloud.
This can likely be resolved by using the --no-service-net and --no-public arguments, or by uninstalling the above-mentioned Python module.
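A sketch of both options (the flag names come from the answer above; the exact pip distribution name is an assumption based on the module name):
# Option 1: skip the Rackspace default networks for this boot
nova boot --config-drive=true --flavor 2 --key-name key1 --image c28bc1e8-a25f-413c-9e13-fecdd5d6f522 --no-service-net --no-public instance1
# Option 2: remove the extension entirely
pip uninstall rax_default_network_flags_python_novaclient_ext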

Devstack - Changing IP address after installation

I have DevStack installed on Ubuntu 12.04 and I could log into the Dashboard. Then I changed the IP of my Ubuntu machine, and after that I couldn't log into the Dashboard anymore.
I get the following error message, and I can see my old IP in it.
ConnectionError at /auth/login/
HTTPConnectionPool(host='OLD_IP_ADDRESS', port=35357): Max retries exceeded with url: /v2.0/tokens (Caused by <class 'socket.error'>: [Errno 113] No route to host)
Request Method: POST
Request URL: http://NEW_IP_ADDRESS/auth/login/
Django Version: 1.4.5
Exception Type: ConnectionError
Exception Value:
HTTPConnectionPool(host='OLD_IP_ADDRESS', port=35357): Max retries exceeded with url: /v2.0/tokens (Caused by <class 'socket.error'>: [Errno 113] No route to host)
Exception Location: /usr/local/lib/python2.7/dist-packages/requests/adapters.py in send, line 246
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path:
['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
'/opt/stack/python-keystoneclient',
'/usr/local/lib/python2.7/dist-packages',
'/opt/stack/python-glanceclient/setuptools_git-1.0b1-py2.7.egg',
'/opt/stack/python-glanceclient',
'/opt/stack/python-cinderclient',
Is there a documented procedure available to change the IP address manually?
My new IP doesn't have a connection to the internet, so I wouldn't be able to redeploy DevStack.
Thanks, guys, for your answers.
I forgot to update my answer; I fixed the issue in an easy way.
The solution is to first run unstack.sh and then run stack.sh once more; it does the necessary fixes. Since I hadn't made much progress with DevStack after installation, I was comfortable re-running stack.sh.
When you run stack.sh the second time it doesn't need to connect to the internet, so my issue is fixed.
Feel free to share your thoughts on this.
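In command form (assuming the usual DevStack checkout location, e.g. ~/devstack):
cd ~/devstack
./unstack.sh
./stack.sh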
You will need to change the IP address hard-coded in OpenStack configuration files generated by devstack. They are stored in /etc/ and elsewhere.
http://xmodulo.com/2013/04/how-to-change-ip-address-after-openstack-installation-via-devstack.html
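A sketch of how to find and replace the old address (the service config paths listed here are common examples, and OLD_IP_ADDRESS / NEW_IP_ADDRESS are placeholders; back up the files first):
grep -rl OLD_IP_ADDRESS /etc/nova /etc/keystone /etc/glance /etc/cinder
sudo sed -i 's/OLD_IP_ADDRESS/NEW_IP_ADDRESS/g' /etc/nova/nova.conf   # repeat for each file grep finds
Restart the affected services afterwards so they pick up the new endpoints.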
Here are a few steps I've taken to get back online.
Back up the answers file:
cp packstack-answers-20130417.txt packstack-answers.txt-SAVE
Replace the IP addresses:
sed -i 's/10\.10\.248\.11/10\.32\.70\.10/g' packstack-answers-20130417.txt
Delete the cinder loopback device; the installer fails if it already exists:
losetup -d /dev/loop0
List what's left mounted via loop devices:
losetup -a
rm /var/lib/cinder/cinder-volumes
Now rerun the deploy script:
packstack --answer-file=packstack-answers-20130417.txt
Fix up other IP addressing concerns with nova-manage in the CLI.
Should work from here.
