Shiny failed to start - r

I had an application on my Shiny Server that was working fine a few months ago. Today I returned to it and hit a most curious error. Whenever I try to access it, I just get this message:
An error has occurred
The application failed to start.
The application exited during initialization.
Now the natural step would be to go check the log for this error, right? But the logs created by this error are just empty, 0 bytes, so I am really puzzled about why this is happening. I also tried running the Shiny sample apps and got the same error, yet the server itself seems to be running just fine.
I know this is a vague question, but honestly I don't know what other information I could include here, given the empty logs. Perhaps someone has come across a similar issue.

Shiny Server defaults to sanitised error logs, which is not useful in this case. You can change this behaviour by adding the line sanitize_errors off; to /etc/shiny-server/shiny-server.conf. You may need to restart Shiny Server to see the effect.
To open /etc/shiny-server/shiny-server.conf:
sudo nano /etc/shiny-server/shiny-server.conf
To restart Shiny Server (on systemd-based systems):
sudo systemctl restart shiny-server
On older upstart-based systems the equivalent is sudo restart shiny-server.
This should give you the verbose error messages you want.
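For reference, here is roughly what the stock /etc/shiny-server/shiny-server.conf looks like with the directive added at the top level; the user, port, and paths below are the defaults from a standard install, so adjust them if your setup differs:
# Run applications as the "shiny" user
run_as shiny;

# Show full error messages instead of sanitised ones
sanitize_errors off;

server {
  listen 3838;

  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}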

Related

jupyterhub fails to spawn server with systemdspawner

I am trying to run JupyterHub on an Ubuntu 20.04 LTS server. My idea is to run python/jupyterhub in a conda virtual environment as a system service. As I want to be able to limit the resources available to individual users, I installed the systemdspawner.
After installing everything and starting the jupyterhub service I can log in through my web browser. However, when trying to start the server, the spawner gets stuck, and after a while I get an error message saying "Spawn failed: Timeout".
In journalctl I can see the following messages:
User logged in: me 302 POST /hub/login?next= -> /hub/spawn (me#::ffff:[my IP address]) 59.42ms
Adding role server to token: <APIToken('93c8...', user='me', client_id='jupyterhub')
Creating oauth client jupyterhub-user-me
pam_loginuid(login:session): Error writing /proc/self/loginuid: Operation not permitted
pam_loginuid(login:session): set_loginuid failed
pam_unix(login:session): session opened for user me by (uid=0)
Failed to open PAM session for me: [PAM Error 14] Cannot make/remove an entry for the specified session
Disabling PAM sessions from now on. user:me
Unit jupyter-me-singleuser in a failed state. Resetting state.
Disclaimer: My Jupyter/Python installation replaces a former installation that was set up by someone else and got messed up a bit over time. I tried to remove everything related and start with a clean installation from scratch. However, as I had very little documentation about the old setup, there is a certain risk that some leftovers of the previous installation may be causing trouble.
Any ideas?
Solved it myself. In the end the PAM-related messages seem to be non-critical and were not related to the timeout at all. Instead, I found a mistake in /etc/systemd/system/jupyterhub.service, where the PATH variable did not include the bin directory of my miniconda installation. A sketch of the corrected unit is below.
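For illustration, the working unit ends up looking something like this; the miniconda location, config path, and user below are assumptions, so substitute the ones from your own setup:
[Unit]
Description=JupyterHub

[Service]
User=root
# PATH must include the bin directory of the miniconda installation,
# otherwise jupyterhub cannot find its helper executables
Environment="PATH=/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/opt/miniconda3/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py

[Install]
WantedBy=multi-user.target
After editing the unit, reload and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart jupyterhub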

Why is microk8s failing to install?

I'm trying to use microk8s for my KubernetesPodOperator in my DAG. Unfortunately I can't seem to get it to install consistently.
I'm using homebrew to install (or reinstall) microk8s and multipass. When I execute
microk8s install --cpu=4 --mem=10000
I get the errors:
launch failed: The following errors occurred:
qemu-system-aarch64: Error: HV_BAD_ARGUMENT
launch failed: instance "microk8s-vm" already exists
An error occurred with the instance when trying to launch with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.
(where launch failed: instance "microk8s-vm" already exists appears several times.)
I've tried reinstalling both several times and that doesn't appear to help. Any advice?
Turns out I needed to use microk8s install --cpu=4 --mem=10, not 10000. It wants GB, not MB. My bad. I wish the error message were a little clearer, though.
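For anyone who also hits the repeated launch failed: instance "microk8s-vm" already exists lines: the failed launch can leave a stale VM behind, and removing it before retrying with the corrected flags may help. These are standard multipass commands:
multipass delete microk8s-vm
multipass purge
microk8s install --cpu=4 --mem=10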

Kibana is stuck on "Kibana server is not ready yet"

After installing the enhanced-table plugin, my Kibana is stuck on the following message:
"Kibana server is not ready yet"
However, after a few minutes it becomes completely unreachable and I get an "Unable to connect" error in my browser.
I removed the plugin with the following command, but the error persists.
./bin/kibana-plugin remove enhanced-table
Would you mind helping me solve this problem? The Kibana log file is available via the following link:
https://drive.google.com/file/d/1LILdo07Q9r0-VNG7hgkbTOaE2eJzhQPs/view?usp=sharing
Thanks and best regards.
Looking through your Kibana log, I see:
{ "type":"log",
"#timestamp":"2020-11-11T05:44:01Z",
"tags":["fatal","root"],"pid":2884,
"message":"{ Error: Optimizations failure.\n 9263 modules\n \n
ERROR in ./src/legacy/core_plugins/console/public/np_ready/application/components/editor_example.js\n
Module not found: Error: Can't resolve '../constants/help_example.txt' in 'D:\\Elastic\\Kibana\\kibana-7.6.1\\src\\legacy\\core_plugins\\console\\public\\np_ready\\application\\components'\n\n
So it might be that Kibana is still looking for the plugin, which is why the error occurred.
You should try stopping Kibana and starting it again.
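If a plain restart does not help, a commonly suggested next step for Kibana versions before 7.10 is to delete the legacy optimizer bundles so they are rebuilt without the removed plugin. The install path below is taken from your log; treat these steps as a suggestion rather than a guaranteed fix:
cd D:\Elastic\Kibana\kibana-7.6.1
rmdir /S /Q optimize
bin\kibana.bat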
In the end I installed a fresh, clean Kibana and the problem was fixed.

RStudio crashing (not at initialization). Error: Error occurred during transmission

Some context on my environment:
I am running RStudio in a Docker container based on the rocker/verse image.
I downloaded this dataset from Kaggle, which is about 470 MB.
When working with it, at some point RStudio restarts. It doesn't happen after a specific call, and I've seen the same problem when working on other projects. Although it is not related to my code, I am posting it below.
library(data.table)

# read the CSV and take a random 70/30 train/test split
fraud  <- fread("path.csv")
fraud1 <- sort(sample(nrow(fraud), nrow(fraud) * 0.7))  # row indices for the training set
train  <- fraud[fraud1, ]
test   <- fraud[-fraud1, ]
Usually this message is printed on the console:
Error: Error occurred during transmission
An error pop-up is also shown (the screenshot is not reproduced here).
I have no idea what is causing it. I would appreciate any help.
Delete the .Rhistory files associated with the installation and any open project.
You have a problem with your user data files for RStudio. Follow the hints given here: https://community.rstudio.com/t/rstudio-server-error-occurred-during-transmission/84258 and here: https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server. In short, they amount to resetting RStudio's per-user state, roughly as sketched below.
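Concretely, the reset comes down to renaming RStudio's per-user state so it is recreated cleanly. In a rocker/verse container the default user is rstudio; the exact paths differ between RStudio versions, so treat the ones below as assumptions:
# run from a shell in the container while RStudio is not in use
mv ~/.rstudio ~/.rstudio-backup                   # RStudio Server 1.3 and earlier
mv ~/.local/share/rstudio ~/rstudio-state-backup  # RStudio Server 1.4 and later
rm .Rhistory                                      # in the project directory, plus ~/.Rhistory if present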

Pintos - UserProg all tests fail is_kernel_vaddr()

I am doing the Pintos project on the side to learn more about operating systems. I had a lot of DevOps trouble at first with it not running well on an 18.04 Ubuntu droplet. I am now running it on the VirtualBox image that UCCS tells students to download for Pintos.
I finished Project 1 and started to map out my solution to Project 2. Following the instructions to create a filesystem disk, I ran
pintos-mkdisk filesys.dsk --filesys-size=2
pintos -- -f -q
but I am getting the error
Kernel PANIC at ../../threads/vaddr.h:87 in vtop(): assertion
`is_kernel_vaddr (vaddr)' failed.
I then tried running make check (all the tests). They are all failing for the same reason.
Am I missing something? Is there something I need to implement to fix this? I reread the instructions and didn't see anything.
I would appreciate any help. Thanks!
I had a similar problem. My code for Project 1 ran fine, but I could not format the filesystem for Project 2.
The failure for me came from the following call chain:
thread_init() -> ... -> thread_schedule_tail() -> process_activate() -> pagedir_activate() -> vtop()
The problem is that init_page_dir is still NULL when pagedir_activate() is called. init_page_dir should have been initialized in paging_init() but this is called after thread_init().
The root cause was that my scheduler was being invoked too early, i.e. before the call to thread_start(). In my case, I had added a call to thread_yield() at the end of every lock_release(), which makes sense from a priority-donation standpoint. Unfortunately, locks are used before the scheduler is ready! To fix this, I added a flag called threading_started that bails out in the first line of my thread_block() and thread_yield() functions if thread_start() has not yet been called, roughly as sketched below.
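A minimal sketch of that guard (threading_started, thread_block(), thread_yield(), and thread_start() are the names from the description above; the surrounding code stands in for the stock threads/thread.c and is only illustrative):
/* threads/thread.c (sketch) */
static bool threading_started = false;   /* becomes true once the scheduler is usable */

void
thread_start (void)
{
  /* ... create the idle thread, enable interrupts, etc. ... */
  threading_started = true;
}

void
thread_yield (void)
{
  /* Locks (and hence lock_release) are used before thread_start(),
     so bail out until the scheduler actually exists. */
  if (!threading_started)
    return;
  /* ... normal yield path ... */
}

thread_block() gets the same early-return guard at the top.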
Good luck!
