I have just upgraded Jupyter to version 4.3.1.
While I can open previously created ipynb files, I cannot create new ones.
When I try to create a new notebook file, I get a pop-up window saying:
Creating Notebook Failed
An error occurred while creating a new notebook
Forbidden
In the terminal I notice this output:
[W 12:53:23.375 NotebookApp] 403 POST /api/contents (::1): '_xsrf' argument missing from POST
[W 12:53:23.383 NotebookApp] 403 POST /api/contents (::1) 8.92ms referer=http://localhost:8888/tree?token=e7fbbb58516dc1359fcc26a1079093166a1f713ee5b94ccd
I use Jupyter with Python 3.5.2 and IPython 5.1.0
Another way to confirm the issue is to open your Jupyter session in another browser; you might be redirected to a login screen asking for the token.
If you open a new console and type
jupyter notebook list
you'll see your currently running notebook server, and its URL will contain a token. Open that URL in a new tab and the problem is solved.
The command's output should look like this:
Currently running servers:
http://localhost:8888/?token=cbad1a6ce77ae284725a5e43a7db48f2e9bf3b6458e577bb :: <path to notebook>
I had to enable cookies in the browser (which I had intentionally disabled). Then the "Forbidden" error disappeared and everything is OK now.
The generally accepted solution to prevent XSRF is to cookie every user with an unpredictable value and include that value as an additional argument with every form submission on your site.
From: http://tornado.readthedocs.io/en/latest/guide/security.html#cross-site-request-forgery-protection
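For illustration, here is a minimal sketch of what a client has to do to satisfy this check when POSTing to the contents API. It uses the requests library; the base URL and token are placeholders for your own values:

import requests

base = "http://localhost:8888"
token = "<your-token>"  # placeholder; take it from `jupyter notebook list`
session = requests.Session()

# An initial GET makes the server set the _xsrf cookie
session.get(base + "/tree", params={"token": token})
xsrf = session.cookies.get("_xsrf")

# Echo the cookie value back as the X-XSRFToken header
# (or as an _xsrf form argument) on every state-changing request
resp = session.post(
    base + "/api/contents",
    headers={"X-XSRFToken": xsrf, "Authorization": "token " + token},
    json={"type": "notebook"},
)
print(resp.status_code)  # 201 on success instead of the 403 above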
Jupyter blocks non-local requests. To access Jupyter from an external address, we can launch it with the following parameters (note the asterisk is quoted so the shell does not expand it):
jupyter notebook --NotebookApp.allow_origin='*' --NotebookApp.allow_remote_access=1
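Alternatively, the same options can be set persistently in a config file. A minimal sketch, assuming the default config location ~/.jupyter/jupyter_notebook_config.py:

# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.allow_origin = '*'          # allow cross-origin requests
c.NotebookApp.allow_remote_access = True  # accept connections from non-local addresses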
I had this problem just now, but I noticed that it worked in Edge. Deleting the entire browser cache, including cookies, in Chrome solved it in my case.
I am trying to run JupyterHub on an Ubuntu 20.04 LTS server. My idea is to run Python/JupyterHub in a conda virtual environment as a system service. As I want to be able to limit the resources available to individual users, I installed the systemdspawner.
After installing everything and starting the jupyterhub service, I can log in through my web browser. However, when trying to start the server, the spawner gets stuck, and after a while I get an error message saying "Spawn failed: Timeout".
In journalctl I can see the following messages:
User logged in: me 302 POST /hub/login?next= -> /hub/spawn (me#::ffff:[my IP address]) 59.42ms
Adding role server to token: <APIToken('93c8...', user='me', client_id='jupyterhub')
Creating oauth client jupyterhub-user-me
pam_loginuid(login:session): Error writing /proc/self/loginuid: Operation not permitted
pam_loginuid(login:session): set_loginuid failed
pam_unix(login:session): session opened for user me by (uid=0)
Failed to open PAM session for me: [PAM Error 14] Cannot make/remove an entry for the specified session
Disabling PAM sessions from now on. user:me
Unit jupyter-me-singleuser in a failed state. Resetting state.
Disclaimer: My Jupyter/Python installation replaces a former installation that was set up by someone else and got messed up a bit over time. I tried to remove everything related to it and start with a clean installation from scratch. However, as I had very little documentation about the old setup, there is a certain risk that some leftovers of the previous installation may be causing trouble.
Any ideas?
Solved it myself. In the end, the PAM-related messages seem to be non-critical and were not related to the timeout at all. Instead, I found a mistake in /etc/systemd/system/jupyterhub.service, where the PATH variable did not include the bin directory of my miniconda installation.
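For reference, a sketch of the relevant part of /etc/systemd/system/jupyterhub.service after the fix; the miniconda paths here are hypothetical and must match your own installation:

[Service]
# PATH must include the conda environment's bin directory,
# otherwise systemd cannot find jupyterhub and its helpers
Environment="PATH=/opt/miniconda3/envs/jupyterhub/bin:/usr/local/bin:/usr/bin:/bin"
ExecStart=/opt/miniconda3/envs/jupyterhub/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py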
I was facing an error with Jupyter Notebook for which I have not found a solution so far:
HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='GET', uri='/ipython/api/kernelspecs',
line 1703, in _execute result = await result
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
A reference to this is mentioned here as well [but it is not much help]; I could not add a comment there as it is closed.
In my case I was using a domain name to point to this Jupyter notebook (through nginx). I didn't realize it at first, but I had changed to a new domain name (since the old one had expired), and that is when this error started occurring. I even tried passing the new domain name as an argument while restarting the notebook server, but that did not help and threw the same error as above:
--GatewayClient.url='https://newdomain.com'
As a temporary fix, I put a mapping of the old domain name to the same IP address in my local DNS hosts file and voilà, that fixed the problem. At least it works.
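For example, a sketch of the hosts-file entry (the domain and IP address are placeholders for my actual values):

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
203.0.113.10   olddomain.com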
While this is not a proper fix, it unblocked me locally so that I can access and execute my Jupyter notebooks.
I will post an update if I manage to fix it and move to the new domain name.
Thanks.
Hello, I'm trying to do web scraping with the Python module requests-html to handle dynamic content on the page https://www.monster.com/jobs/search?q=Software+Engineer&where=. My code is:
from requests_html import HTMLSession
url = 'https://www.monster.com/jobs/search?q=Software+Engineer&where='
session = HTMLSession()
response = session.get(url)
response.html.render()
but when I run response.html.render() I get this error:
OSError: [WinError 14001] The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail
The first time I ran render() I got
[W:pyppeteer.chromium_downloader] start chromium download.
Download may take a few minutes.
[W:pyppeteer.chromium_downloader]
chromium download done.
[W:pyppeteer.chromium_downloader] chromium extracted to: C:\Users\user\AppData\Local\pyppeteer\pyppeteer\local-chromium\588429
However, that file path doesn't exist, even though pyppeteer is actually installed (pyppeteer==0.2.5). Does anyone have an idea what is going on?
You're having this issue because the Chromium setup failed.
You can either try to reinstall requests-html, or do what I did: switch from the Windows Store Python to the download from the python.org website and then install requests-html again.
After setting everything up correctly with the downloaded Python, I switched back to Python 3.9 from the Store and everything still works.
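As an additional check, pyppeteer ships a small helper to trigger the Chromium download manually, which can help verify the setup outside of requests-html (a sketch, assuming pyppeteer's scripts are on your PATH):

# download Chromium for pyppeteer explicitly
pyppeteer-install

# print where pyppeteer expects the Chromium binary and whether it is there
python -c "import pyppeteer.chromium_downloader as cd; print(cd.chromium_executable(), cd.check_chromium())"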
I moved a WordPress site from a cPanel server to a Plesk server. Then I manually upgraded the site from version 3.5.1 to 4.8.3. Afterwards I tried to upgrade plugins (FancyBox) as well as to install new plugins (Contact Form 7).
The issue I have is that I get the following error message: "Update Failed: Download failed. Destination directory for file streaming does not exist or is not writable."
In the server's log file I can see a few warnings like the following one:
mod_fcgid: stderr: PHP Warning: file_exists(): open_basedir restriction in effect. File(/home/dentist/domains/dentist.com.gr/public_html/newsite/wp-content/uploads//easy-fancybox.1.6.2-Vlaovu.tmp) is not within the allowed path(s): (/var/www/vhosts/ggeorgiou.gr/ggeorgiou.work/:/tmp/) in /var/www/vhosts/ggeorgiou.gr/ggeorgiou.work/wd/dentist.com.gr/wp-includes/functions.php on line 2085, referer: http://www.ggeorgiou.work/wd/dentist.com.gr/wp-admin/plugins.php
Finally, note that in the "Settings --> Media" menu, in the "Store uploads in this folder" field, I have put the following path on the current server: "/var/www/vhosts/ggeorgiou.gr/ggeorgiou.work/wd/dentist.com.gr/wp-content/uploads".
Any idea what is wrong here?
Thank you
From what you posted, your exact error message is "open_basedir restriction in effect". You can read more about how to solve it here: How can I relax PHP's open_basedir restriction?
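For illustration, the directive has this php.ini-style form; the paths below are taken from the warning above, and you would extend the colon-separated list with any directory WordPress needs to write to (where you set it depends on your Plesk PHP configuration, e.g. the per-domain PHP settings):

; colon-separated list of base directories PHP may access
open_basedir = /var/www/vhosts/ggeorgiou.gr/ggeorgiou.work/:/tmp/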
Also, assuming you have a backup of the previous version, I would start by restoring that.
Secondly, there are many versions between 3.5.1 and 4.8.3. It is advisable to upgrade in increments of one version at a time. It takes longer but is safer.
I finished installing CKAN in a virtual machine, but there is a problem with the CKAN interface when accessing the IP of the CKAN website. The CSS styles cannot be loaded, so only the HTML part is displayed.
When using Chrome to look at the page source, a warning can be seen in the console: "Resource interpreted as Stylesheet but transferred with XXX MIME type text/plain: XXX(CSS link)."
From the Linux terminal, whenever I click links on the CKAN website, Python error messages come out: [Errno 32] Broken pipe
Also, my CKAN link is set to http://localhost:8773; I am not sure if port 8773 is a problem. (Port 5000 is used for login in the virtual machine.)
Other installation information: CentOS 7, CKAN 2.4.1, Tomcat 7.0.69, Solr 1.4.0, PostgreSQL 9.2.18
Thanks a lot!
[Screenshot: my CKAN interface problem]
You probably created a file similar to /etc/ckan/default/development.ini while installing CKAN, right?
Try setting ckan.site_url = http://localhost:5000/ there.
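A minimal sketch of the relevant lines in that file, with the port matching your login setup above:

[app:main]
ckan.site_url = http://localhost:5000/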
You could also cross-check different CKAN setup tutorials for stuff that is missing in your setup:
https://yorkhuang-au.github.io/2016/01/08/Install-CKAN-On-Centos7/
https://github.com/ckan/ckan/wiki/How-to-install-CKAN-2.x-on-CentOS-7
Also, what I usually do in these cases is search for a Vagrant box with a finished setup and use that: https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=ckan