I am trying to do an OpenStack deployment following the book "OpenStack Cloud Computing Cookbook" (2012). I did everything exactly as the book describes. Everything was fine until I ran the command:
euca-run-instances ami-00000002 -t m1.small -k openstack
to start an OpenStack instance.
After I ran this command, euca-describe-instances showed the instance status as pending at first. But after a while, on the OpenStack compute node, I saw an error message saying:
block nbd15: receive control failed (result -32)
Then euca-describe-instances showed the instance status as error.
I have tried the whole process twice (starting over from installing the virtual machine), with the same result.
Can anybody help? I am now stuck here.
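A couple of places that usually have more context on the compute node (the paths assume the Ubuntu packaging the book uses; adjust as needed):
# the nbd error comes from the kernel driver, so dmesg may have more detail
dmesg | grep nbd
# nova-compute's log usually carries the full traceback for the failed spawn
sudo tail -n 100 /var/log/nova/nova-compute.log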
Sorry to request clarification, but what version of OpenStack are you using and what was the exact error message (please include the whole long line, perhaps with some context)? The text "receive control failed" does not appear in the nova codebase.
I am trying to run JupyterHub on an Ubuntu 20.04 LTS server. My idea is to run Python/JupyterHub in a conda virtual environment as a system service. As I want to be able to limit the resources available to individual users, I installed the systemdspawner.
After installing everything and starting the jupyterhub service, I can log in through my web browser. However, when trying to start the server, the spawner gets stuck, and after a while I get an error message saying "Spawn failed: Timeout".
In journalctl I can see the following messages:
User logged in: me 302 POST /hub/login?next= -> /hub/spawn (me@::ffff:[my IP address]) 59.42ms
Adding role server to token: <APIToken('93c8...', user='me', client_id='jupyterhub')
Creating oauth client jupyterhub-user-me
pam_loginuid(login:session): Error writing /proc/self/loginuid: Operation not permitted
pam_loginuid(login:session): set_loginuid failed
pam_unix(login:session): session opened for user me by (uid=0)
Failed to open PAM session for me: [PAM Error 14] Cannot make/remove an entry for the specified session
Disabling PAM sessions from now on. user:me
Unit jupyter-me-singleuser in a failed state. Resetting state.
Disclaimer: My Jupyter/Python installation replaces a former installation that was set up by someone else and got messed up a bit over time. I tried to remove everything related and start with a clean installation from scratch. However, as I had very little documentation about the old setup, there is a certain risk that some leftovers of the previous installation may be causing trouble.
Any ideas?
Solved it myself. In the end, the PAM-related messages seem to be non-critical and were not related to the timeout at all. Instead, I found a mistake in /etc/systemd/system/jupyterhub.service, where the PATH variable did not include the bin directory of my miniconda installation.
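For anyone hitting the same timeout, the fix amounts to making sure the unit's PATH contains the conda environment's bin directory. A minimal sketch of the relevant part of the unit file (the miniconda path and environment name are assumptions; substitute your own install):
[Service]
# PATH must include the bin directory of the conda env that provides jupyterhub
Environment="PATH=/opt/miniconda3/envs/jupyterhub/bin:/usr/local/bin:/usr/bin:/bin"
ExecStart=/opt/miniconda3/envs/jupyterhub/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
After editing the unit, run systemctl daemon-reload and systemctl restart jupyterhub to pick up the change.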
I'm trying to use microk8s for my KubernetesPodOperator in my DAG. Unfortunately, I can't seem to get it to install consistently.
I'm using homebrew to install (or reinstall) microk8s and multipass. When I execute
microk8s install --cpu=4 --mem=10000
I get the errors:
launch failed: The following errors occurred:
qemu-system-aarch64: Error: HV_BAD_ARGUMENT
launch failed: instance "microk8s-vm" already exists
An error occurred with the instance when trying to launch with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.
(where launch failed: instance "microk8s-vm" already exists appears several times.)
I've tried reinstalling both several times and that doesn't appear to help. Any advice?
Turns out I needed to be using microk8s install --cpu=4 --mem=10, not 10000. It wants GB, not MB. My bad. I wish the error message were a little clearer, though.
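For anyone in the same spot, the repeated "already exists" errors likely come from the half-created VM left behind by the failed launch, so it may help to remove it before reinstalling (delete and purge are standard multipass commands):
# remove the leftover VM from the failed launch, then reinstall with memory in GB
multipass delete microk8s-vm
multipass purge
microk8s install --cpu=4 --mem=10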
The Jim Hester YouTube video looks like a potentially really helpful way to put breakpoints in Rcpp code that passes some variables in unexpected ways. However, when I follow the start-up directions (RStudio console copied below), it fails with a message that is not helpful to a new user of lldb.
gcn@GCN-MacBook-Pro-7 ISIMIPData % R -d lldb
(lldb) target create "/Library/Frameworks/R.framework/Resources/bin/exec/R"
Current executable set to '/Library/Frameworks/R.framework/Resources/bin/exec/R' (x86_64).
(lldb) run
error: process exited with status -1 (attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries when the attached failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.))
(lldb)
The Console.app output that I could find is not very useful.
A later post describes an updated process, but it too fails in a similar way. I'm using RStudio version 1.4.1717 with R version 4.1.
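One avenue that may be worth checking (an assumption on my part, since the message says the attach was denied by the OS rather than by lldb itself) is macOS's debugging permissions, which are gated by developer mode and System Integrity Protection:
# show whether System Integrity Protection is active
csrutil status
# enable developer mode so debugserver is allowed to attach
sudo DevToolsSecurity -enable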
I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND",
values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio/
ADD big-query-tutorial.R /home/rstudio/
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud (build-and-push commands sketched below)
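For reference, steps 3 and 4 amounted to something like the following; the image and project names are placeholders:
# build and test locally
docker build -t big-query-tutorial .
docker run --rm big-query-tutorial
# tag and push to Container Registry (assumes gcloud auth configure-docker was run once)
docker tag big-query-tutorial gcr.io/xxxx-project/big-query-tutorial
docker push gcr.io/xxxx-project/big-query-tutorial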
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error:
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two bugs that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787), Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment identifies the source of the errors: you need to expose a web server that listens on the $PORT and answers HTTP/1 or HTTP/2 requests.
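For completeness, if you wanted the container to satisfy that contract, the R side would need something along these lines. This is only a sketch using the plumber package, which is my suggestion and not part of the original tutorial:
library(plumber)
# minimal HTTP endpoint so Cloud Run sees a server listening on $PORT
pr <- pr_get(pr(), "/", function() "ok")
# Cloud Run injects the port to bind via the PORT environment variable
pr_run(pr, host = "0.0.0.0", port = as.integer(Sys.getenv("PORT", "8080")))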
However, I have a workaround: you can use Cloud Build for this. Simply define a build step with your container name and the args if needed; a minimal sketch follows.
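A cloudbuild.yaml along those lines (the image and script paths are assumptions based on the question):
steps:
  - name: 'gcr.io/xxxx-project/big-query-tutorial'
    entrypoint: 'Rscript'
    args: ['/home/rstudio/big-query-tutorial.R']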
Let me know if you need more guidance on this (strange) workaround.
The log message "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as a known gVisor issue.
I'm really just looking to pick the community's brain for some leads in figuring out what is going on with the issue I'm having.
I'm writing an MR job with RHadoop (rmr2, v3.0.0) and things are great -- I/O with HDFS, mapping, reducing. No problems. Life is great.
I'm trying to schedule the job with Apache Oozie, and am running into some issues:
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
I've read the rmr2 debugging guide, but nothing is really getting to the stderr because the job fails before anything even gets scheduled.
In my head, everything points to a difference in environments. However, Oozie is running the job as the same user that I'm able to run everything with via the CLI, and all of the R environment variables (fetched with Sys.getenv()) are the same, except that there is some additional classpath stuff set with Oozie.
I can post more of the OS or Hadoop versions and config details, but sleuthing version-specific bugs seems like a bit of a red herring, as everything runs fine at the command line.
Anybody have any thoughts on what might be some helpful next steps in hunting this beast down?
UPDATE:
I overwrote the system function in the base package to log the user, the host name of the node, and the command being executed before the internal call to system. So before any system call is actually executed, I get something like the following in the stderr:
user@host.name
/usr/bin/hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-102.jar ...
When run with Oozie, the command printed to stderr fails with an exit status of 1. When I run the command on user@host.name myself, it runs successfully. So essentially the EXACT same command with the SAME user on the SAME node fails with Oozie, but runs successfully from the CLI.
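For anyone who wants to reproduce the logging shim described in the update, here is a minimal sketch of one way to do it in R (illustrative, not necessarily the exact code used above):
# wrap base::system so every call logs who/where/what to stderr first
original_system <- base::system
logging_system <- function(command, ...) {
  info <- Sys.info()
  writeLines(paste0(info[["user"]], "@", info[["nodename"]]), con = stderr())
  writeLines(command, con = stderr())
  original_system(command, ...)
}
# swap the binding in the base environment (it is locked by default)
unlockBinding("system", baseenv())
assign("system", logging_system, envir = baseenv())
lockBinding("system", baseenv())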