Why is microk8s failing to install? (Apple M1)

I'm trying to use microk8s for the KubernetesPodOperator in my DAG. Unfortunately, I can't seem to get it to install consistently.
I'm using homebrew to install (or reinstall) microk8s and multipass. When I execute
microk8s install --cpu=4 --mem=10000
I get the errors:
launch failed: The following errors occurred:
qemu-system-aarch64: Error: HV_BAD_ARGUMENT
launch failed: instance "microk8s-vm" already exists
An error occurred with the instance when trying to launch with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.
(The line launch failed: instance "microk8s-vm" already exists appears several times.)
I've tried reinstalling both several times and that doesn't appear to help. Any advice?

Turns out I needed to use microk8s install --cpu=4 --mem=10, not --mem=10000: the flag expects GB, not MB. My bad. I wish the error message were a little clearer, though.
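For anyone else who hits this, a minimal before/after sketch based on the above (assuming your Mac can actually spare 4 CPUs and 10 GB for the VM):

# Fails: --mem is read as GB, so this asks for ~10000 GB of RAM,
# which is presumably what makes QEMU abort with HV_BAD_ARGUMENT
microk8s install --cpu=4 --mem=10000

# Works: 10 GB for the microk8s-vm instance
microk8s install --cpu=4 --mem=10

If a failed attempt left a stale instance behind (the repeated "already exists" line), multipass delete microk8s-vm followed by multipass purge should clear it out before retrying.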

Related

jupyterhub fails to spawn server with systemdspawner

I am trying to run jupyterhub on an Ubuntu 20.04 LTS server. My idea is to run python/jupyterhub in a conda virtual environment as a system service. As I want to be able to limit the resources available to individual users I installed the systemdspawner.
After installing everything and starting the jupyterhub service, I can log in through my web browser. However, when I try to start the server, the spawner gets stuck, and after a while I get an error message saying "Spawn failed: Timeout".
In journalctl I can see the following messages:
User logged in: me 302 POST /hub/login?next= -> /hub/spawn (me#::ffff:[my IP address]) 59.42ms
Adding role server to token: <APIToken('93c8...', user='me', client_id='jupyterhub')
Creating oauth client jupyterhub-user-me
pam_loginuid(login:session): Error writing /proc/self/loginuid: Operation not permitted
pam_loginuid(login:session): set_loginuid failed
pam_unix(login:session): session opened for user me by (uid=0)
Failed to open PAM session for me: [PAM Error 14] Cannot make/remove an entry for the specified session
Disabling PAM sessions from now on. user:me
Unit jupyter-me-singleuser in a failed state. Resetting state.
Disclaimer: My Jupyter/Python installation replaces a former installation that was set up by someone else and got messed up a bit over time. I tried to remove everything related and start with a clean installation from scratch. However, as I had very little documentation about the old setup, there is a certain risk that leftovers of the previous installation may be causing trouble.
Any ideas?
Solved it myself. In the end the PAM-related messages seem to be non-critical and were not related to the timeout at all. Instead, I found a mistake in /etc/systemd/system/jupyterhub.service, where the PATH variable did not include the bin directory of my miniconda installation.
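For anyone hitting the same timeout, a hypothetical fragment of /etc/systemd/system/jupyterhub.service showing the kind of line that was at fault; the /opt/miniconda3 path is an assumption, so substitute your own miniconda location:

[Service]
# systemd starts services with a minimal PATH; if the conda env's bin
# directory is missing here, the spawner cannot find
# jupyterhub-singleuser and spawns time out (paths are illustrative)
Environment="PATH=/opt/miniconda3/bin:/usr/local/bin:/usr/bin:/bin"
ExecStart=/opt/miniconda3/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py

After editing the unit, systemctl daemon-reload followed by systemctl restart jupyterhub picks up the change.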

MLflow not running on machine

I am trying to run MLflow code in R after installing it. However, after loading the library with library(mlflow), when I run mlflow_log_params("foo", 42) I get the error message below printed in my console:
Error in rethrow_call(c_processx_exec, command, c(command, args), pty, :
Command 'C:/Users/IFEANYI/AppData/Local/r-miniconda/envs/r-mlflow-1.19.0/mlflow' not found #win/processx.c:982 (processx_exec)
I also get the same error message when I run mlflow_ui(). Was there something I should have done during installation whose absence is affecting its functionality? Do I need to install and load the processx library for MLflow to run on my Windows 10 machine? I want to use MLflow in my machine learning projects, so any advice would be appreciated. Thanks in advance.
The error should disappear after setting the MLFLOW_BIN system variable (Windows) to the MLflow CLI executable: "....conda\envs\r-mlflow-1.24.0\Scripts\mlflow.exe".
If it works please mark the problem as resolved.
Unfortunately, you will then get the next error, "Error in wait_for(function() mlflow_rest("experiments", "list", client = client)", for which I cannot find a solution.
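For the record, one way to set the variable persistently from a Windows command prompt; the full path below is an assumption pieced together from the question's r-miniconda layout, so adjust the username and env version to whatever the mlflow R package actually created:

rem Point the R mlflow package at the CLI executable (path is an example)
setx MLFLOW_BIN "C:\Users\IFEANYI\AppData\Local\r-miniconda\envs\r-mlflow-1.24.0\Scripts\mlflow.exe"

setx only affects new processes, so restart R afterwards; for the current R session alone, Sys.setenv(MLFLOW_BIN = "...") should have the same effect.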

OpenStack placement-status upgrade check giving NotInitializedError

I am trying to install OpenStack Wallaby on Ubuntu 20.04. I have already installed Keystone and Glance and they work as expected. But after installing Placement, when I try to verify it with 'placement-status upgrade check' I constantly get the same error.
Error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/oslo_upgradecheck/upgradecheck.py", line 196, in run
return conf.command.action_fn()
File "/usr/lib/python3/dist-packages/oslo_upgradecheck/upgradecheck.py", line 104, in check
result = func_name(self, **kwargs)
File "/usr/lib/python3/dist-packages/oslo_upgradecheck/common_checks.py", line 41, in check_policy_json
policy_path = conf.find_file(conf.oslo_policy.policy_file)
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2543, in find_file
raise NotInitializedError()
oslo_config.cfg.NotInitializedError: call expression on parser has not been invoked
Is this normal at this stage or am I doing something wrong here?
I already checked the database connection (the user and password work, and the database exists). I also checked the username, password and url options under keystone_authtoken in placement.conf, but I can't find what's wrong.
I also encountered this problem with Wallaby on Ubuntu 20.04. I worked around it by installing Placement from PyPI instead of the Ubuntu package manager. Note that if you install Placement this way, you will have to work out how to start it automatically yourself.
Install and configure Placement from PyPI
https://docs.openstack.org/placement/wallaby/install/from-pypi.html
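The PyPI route in that guide boils down to roughly the following sketch (the openstack-placement package name comes from the linked page; follow the guide itself for configuration and the web server setup):

# Install Placement from PyPI into a virtualenv instead of from the
# Ubuntu archive; PyMySQL is needed for the MySQL database backend
python3 -m venv /opt/placement-venv
. /opt/placement-venv/bin/activate
pip install openstack-placement pymysql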
I had the same problem, so I changed to Victoria: same issue. Digging around in the docs, though, I found the cause. The command that populates the database looks similar for Keystone, Glance and Placement, but for Placement it should be 'su -s /bin/sh -c "placement-manage db sync" placement'. Notice that for Placement it is 'db sync', not 'db_sync' as it is for the others (see the side-by-side below). I created scripts (well, actually I am using Ansible) and just cut and pasted between them, and my guess is you have done the same. Because the sync command never actually runs, the database stays empty, hence the error.
Guy
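To make the difference explicit, here are the sync commands side by side; only the last one is valid for Placement:

# Keystone and Glance use an underscored db_sync subcommand:
su -s /bin/sh -c "keystone-manage db_sync" keystone
su -s /bin/sh -c "glance-manage db_sync" glance

# Placement uses two words; 'db_sync' is not a placement-manage
# subcommand, so nothing runs and the database stays empty
su -s /bin/sh -c "placement-manage db sync" placement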

Getting: "Failed to fetch metadata" when starting up Jupyter

I am using JupyterLab as my IDE and I have a couple of packages installed, namely 'jupyterlab-dash' and 'jupyterlab-plotly'. My issue is that when I launch 'jupyter lab' from the terminal, I notice the following error:
Failed to fetch package metadata for 'jupyterlab-dash': URLError(gaierror(8, 'nodename nor servname provided, or not known'))
I'm not sure why this error pops up, but I've noticed it only appears when I have no internet connection (working on a train, etc.). Could it be that these packages try to call out to something at launch, and the error is raised because there is no connection?
In the end I am able to use JupyterLab and the packages as intended (at least it seems so), but I'm curious why this 'failed to fetch metadata' error appears.
Thanks,

OpenStack nbd15 error information

I am trying to do an OpenStack deployment following the book "OpenStack Cloud Computing Cookbook" (2012). I did everything exactly as the book describes. Everything was fine until I ran the command:
euca-run-instances ami-00000002 -t m1.small -k openstack
to start an OpenStack instance.
After I ran this command, euca-describe-instances showed the instance status as pending at first. But after a while, on the OpenStack compute node, I saw an error message saying:
block nbd15: receive control failed (result -32)
Then euca-describe-instances showed the instance status was error.
I have tried the whole process twice (I mean starting over from installing the virtual machine), with the same result.
Can anybody help? I am now stuck here.
Sorry to request clarification, but what version of OpenStack are you using, and what was the exact error message (please include the whole log line, perhaps with some context)? The text "receive control failed" does not appear in the nova codebase.
