Embed JupyterLab into a website - jupyter-notebook

Can I embed JupyterLab into a website? Currently it throws a frame-ancestors 'self' error. Is it possible to change some configuration to allow embedding it in an iframe?

Yes. By default, both JupyterHub and the single-user Jupyter Notebook/Lab server are configured to prohibit outside domains from embedding the page. As a workaround, you can include your domain in the frame-ancestors directive:
jupyterhub_config.py:

c.JupyterHub.tornado_settings = {
    'headers': {
        'Content-Security-Policy':
            "frame-ancestors 'self' http://yourdomain.com"
    }
}

jupyter_notebook_config.py:

c.NotebookApp.tornado_settings = {
    'headers': {
        'Content-Security-Policy':
            "frame-ancestors 'self' http://yourdomain.com"
    }
}
If you are using a Kubernetes-based deployment, the configuration is slightly different and involves building a Docker image of the single-user server with these settings baked in. You can check out a GitHub repo I created a while back that walks through these steps.
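To see why the embed succeeds or fails, it helps to know how the browser evaluates the header: it looks for a frame-ancestors directive and checks whether the embedding origin is listed. A minimal, illustrative Python sketch of that check (the helper name is hypothetical, and real CSP matching also handles wildcards and scheme defaults):

```python
# Illustrative only: parse a Content-Security-Policy header and check
# whether an embedding origin is allowed by its frame-ancestors directive.

def frame_ancestors_allows(csp_header: str, origin: str) -> bool:
    for directive in csp_header.split(";"):
        parts = directive.strip().split()
        if parts and parts[0] == "frame-ancestors":
            # The remaining tokens are the allowed sources
            return origin in parts[1:]
    return False  # no frame-ancestors directive present

csp = "frame-ancestors 'self' http://yourdomain.com"
print(frame_ancestors_allows(csp, "http://yourdomain.com"))  # True
print(frame_ancestors_allows(csp, "http://evil.example"))    # False
```

This is why adding your site to the directive, as in the configs above, is enough to stop the browser from refusing the frame.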

Related

Reference outside text file content from Nginx configuration file

I am looking at options to add client-side certificate authentication with a fingerprint whitelist to a local site, and have successfully configured nginx to operate in the intended manner. My configuration is as follows:
# Client Certificate Whitelisting
map $ssl_client_fingerprint $reject {
    default 1;
    <ALLOWED_FINGERPRINT_1> 0;
    <ALLOWED_FINGERPRINT_2> 0;
    ...
    <ALLOWED_FINGERPRINT_N> 0;
}

server {
    ...
    ssl_client_certificate /etc/pki/tls/certs/Private-CA-bundle.pem;
    ssl_verify_client on;
    ...
    if ($reject) { return 403; }
    ...
}
However, I would like to store the fingerprint list in a separate text file, rather than manipulating the nginx configuration file directly each time. Is this possible?
As a bonus, it would be great if I could modify the contents of the text file and have them take effect without reloading nginx. It is acceptable for removals to still require a service restart or other manual session teardown procedure.
---- EDIT ----
Based on the accepted answer, I was able to get this working.
The updated configuration file is:
# Client Certificate Whitelisting
map $ssl_client_fingerprint $reject {
    default 1;
    include /etc/nginx/cert-whitelist;
}
I was able to add a new certificate and apply the changes without a full service restart.
### Attempt connection with client certificate; returns 403 Forbidden
[root]# cat /run/nginx.pid
5606
[root]# echo "${FINGERPRINT} 0;" >> /etc/nginx/cert-whitelist
[root]# kill -1 $(cat /run/nginx.pid)
[root]# cat /run/nginx.pid
5606
### Attempt connection with client certificate; success
The map directive can source a correctly formatted file via include; see this document for details.
You can send nginx a SIGHUP to re-read the configuration file without restarting it; see this document for details.
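Since the included file contains nothing but `<fingerprint> <value>;` lines, it is easy to generate from a plain list of allowed fingerprints. A small Python sketch (the helper name and fingerprint values are hypothetical):

```python
# Hypothetical helper: render allowed fingerprints as nginx map-include lines.

def to_map_entries(fingerprints):
    # Each line maps a fingerprint to 0 ("do not reject"), matching the
    # "map $ssl_client_fingerprint $reject" block above.
    return "\n".join(f"{fp} 0;" for fp in fingerprints)

print(to_map_entries(["<ALLOWED_FINGERPRINT_1>", "<ALLOWED_FINGERPRINT_2>"]))
```

Appending a new entry and sending SIGHUP, as shown in the transcript above, then picks it up without a restart.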

Cannot force Firebase Realtime Database to use websockets only - Long polling blocked by CSP

I am busy creating a Google Chrome extension, and under the new rules of Manifest V3 you are no longer allowed to run any remote script; such requests get blocked by the Content Security Policy (CSP).
Firebase Realtime Database has two ways of communicating: WebSockets as the primary, and long polling as the fallback.
The way it usually works is that if a WebSocket connection fails, it reverts to long polling, and if long polling succeeds it goes back to WebSockets. But if your CSP blocks the long poll, I am stuck and the app can never connect again.
Refused to load the script '<URL>' because it violates the following Content Security Policy directive: "script-src 'self'". Note that 'script-src-elem' was not explicitly set, so 'script-src' is used as a fallback.
2.b786d402.chunk.js:2 Refused to load the script 'https://xxxx-default-rtdb.europe-west1.firebasedatabase.app/.lp?start=t&ser=75721928&cb=1&v=5&p=1:592645519845:web:db72abd212b7364c72170c&ns=tonews-default-rtdb' because it violates the following Content Security Policy directive: "script-src 'self'". Note that 'script-src-elem' was not explicitly set, so 'script-src' is used as a fallback.
I can reset this by removing the extension and re-installing it.
I have found some posts online suggesting I could just change my configuration.
The recommended solution is to change the databaseURL field in the Firebase config to start with wss:// instead of https://.
This works for about a week or so, but thereafter I get stuck in CSP prison again... or do I? It's hard to tell, since reinstalling the extension solves the problem anyway, and is required to apply the new changes.
package.json (excerpt):

"firebase": "^9.6.1",

manifest.json:

{
  "name": "xxxx",
  "description": "xxxxxxxxxxx",
  "version": "1.0",
  "manifest_version": 3,
  "action": {
    "default_popup": "index.html",
    "default_title": "Open the popup"
  },
  "chrome_url_overrides": {
    "newtab": "index.html"
  },
  "content_security_policy": {
    "extension_pages": "script-src 'self'; object-src 'self'",
    "sandbox": "sandbox allow-scripts; script-src 'self' 'https://apis.google.com/' 'https://www.gstatic.com/' 'https://*.firebaseio.com' 'https://*.firebasedatabase.app' 'https://www.googleapis.com' 'https://ajax.googleapis.com'; object-src 'self'"
  },
  "icons": {
    "16": "favicon-16x16.png",
    "48": "favicon-32x32.png",
    "128": "android-chrome-192x192.png"
  }
}
I'm honestly stuck and very frustrated because even if I do find a fix, it's hard to verify it..
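For reference, the databaseURL workaround described above amounts to a one-line change in the Firebase initialization, so the RTDB client starts on WebSockets and never issues the long-polling .lp script requests that the CSP blocks. A sketch, assuming the v9 modular SDK and the placeholder project name from the logs:

```javascript
// Hypothetical init snippet; only the databaseURL scheme differs from a stock setup
import { initializeApp } from "firebase/app";
import { getDatabase } from "firebase/database";

const app = initializeApp({
  // wss:// instead of https:// keeps the RTDB client on WebSockets
  databaseURL: "wss://xxxx-default-rtdb.europe-west1.firebasedatabase.app",
  // ...remaining keys (apiKey, projectId, appId, ...) unchanged
});
const db = getDatabase(app);
```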

Varnish + Nginx proxy configuration on plesk

I followed the official tutorial for the Varnish-via-Docker configuration on Plesk: https://www.plesk.com/blog/product-t...cker-container
I have an Ubuntu VPS with Plesk and many domains.
I followed all the steps:
I created a domain test.monserveur.com
I used the Docker image million12/varnish
In the Docker container settings, the mapping redirects port 80 to 32780
In Plesk's hosting parameters, the options "SSL/TLS support" and "Permanent SEO-safe 301 redirect from HTTP to HTTPS" are deactivated
I also deactivated the security mod for this domain
In the proxy rules of the Docker container (/etc/varnish/default.vcl), I set .host to test.monserveur.com and .port to 7080
In sub vcl_deliver, I put:

if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
} else {
    set resp.http.X-Cache = "MISS";
}
I still get a 503 page, with X-Cache: MISS in the headers, for test.monserveur.com.
I can't figure out where the problem is. I tried setting .host to the server IP, and also pointing it at another domain on the server. I think it's a settings problem, but I don't know where.
Thanks in advance
A 503 response from Varnish means that your Docker container is not configured properly. Check whether the container, and Varnish inside it, are running properly. Additionally, the configuration file must have valid syntax, and the correct port and IP address of the backend server must be set in it.
Without knowing what you've entered, I cannot give you better advice. If you follow the tutorial completely, it will work; I created over 10 working instances while writing it!
PS: Please use the official Plesk forum, with more information (also add your configuration file), if you still cannot solve your problem: https://talk.plesk.com/
Good luck!
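One common culprit worth checking (an assumption, since the actual default.vcl isn't shown): the backend definition inside the container. From inside Docker, the domain name may not resolve back to the host machine, so pointing the backend at an address reachable from the container is the safer sketch:

```
# /etc/varnish/default.vcl -- backend sketch; host and port are assumptions
backend default {
    .host = "172.17.0.1";   # host machine as seen from the default Docker bridge (varies by setup)
    .port = "7080";         # Plesk's Apache HTTP port
}
```

If the fetch from this backend fails, Varnish serves exactly the kind of 503 described above.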

Modify x-frame-options in apache2

I want to use an iframe on my localhost web server (WAMP).
The iframe loads a form from a remote web server.
I have access to the remote web server. It runs Apache 2 (https://help.ubuntu.com/lts/serverguide/httpd.html); I modified its security.conf file and loaded the 'headers' module.
I modified security.conf with this line (the IP is the IP of my local computer):

Header append X-Frame-Options "ALLOW-FROM http://localhost, http://172.18.48.120, 172.18.48.120"

But when I test the changes, it always says the same thing:

Refused to display 'http://externalURL.net/form.php' in a frame because it set 'X-Frame-Options' to 'sameorigin'.

Any idea? Where's the problem?
Just for completeness:
Here are the lines to add to your apache2/conf-available/security.conf file to make your iframed content available to browsers supporting either or both of the X-Frame-Options and Content-Security-Policy headers (as stated on this survey site). Note that ALLOW-FROM accepts a single origin, not a list:

Header set X-Frame-Options "ALLOW-FROM https://www.example.com"
Header set Content-Security-Policy "frame-ancestors https://www.example.com"

Make sure the headers module is enabled:

a2enmod headers

then restart Apache:

service apache2 restart

That's it!
I finally solved it. The solution is:
Load the headers module in Apache 2.
Modify security.conf, appending this line:

Header set X-Frame-Options 'ALLOW-FROM http://externalURL.net'

(if you use a local web server, e.g. WAMP, this is also valid:
Header set X-Frame-Options 'ALLOW-FROM http://localhost')

Reload the apache2 service.
If you want to test it, don't use Google Chrome: it never supported the ALLOW-FROM directive, so it keeps reporting:
Refused to display 'http://externalURL.net/form.php' in a frame because it set 'X-Frame-Options' to 'sameorigin'.
It works fine in Firefox.
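A caveat to the accepted fix: ALLOW-FROM was only ever implemented by a few browsers (Firefox among them) and has since been removed from current ones, so today the Content-Security-Policy form is the one to rely on. A sketch for the same setup (the origins are examples):

```
# security.conf -- modern replacement for ALLOW-FROM, which current browsers ignore
Header set Content-Security-Policy "frame-ancestors 'self' http://localhost"
```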

nginx + ssi + remote uri access does not work

I have a setup with nginx in front and Apache+PHP behind.
My PHP application caches some pages in memcache, which nginx accesses directly, except for some dynamic parts which are built using SSI in nginx.
The first problem I had was that nginx didn't try to use memcache for the SSI URI:

<!--# include virtual="/myuser" -->

So I figured that if I forced it to use a full URL, it would:

<!--# include virtual="http://www.example.com/myuser" -->

But in the log files (both nginx and apache) I can see that a slash has been added at the beginning of the URL:

http ssi filter "/http://www.example.com/myuser"

In the source code of the SSI module I see a PREFIX that seems to be added, but I can't really tell whether I can disable it.
Has anybody hit this issue?
Nginx version: 0.7.62 on Ubuntu Karmic 64-bit
Thanks a lot
You can configure nginx to include remote URLs even though you cannot reference them directly in SSI instructions. In the site config, create a location with a local path plus a named location that points wherever you want. For example:

server {
    ...
    location /remote {
        # named locations are reached via try_files (or error_page), not proxy_pass
        try_files /nonexistent @long_haul;
    }
    location @long_haul {
        proxy_pass http://porno.com;
    }
    ...
}

and in the served HTML use an include directive that refers to the /remote path:

<!--# include virtual="/remote/rest-of-url&and=parameters" -->

Note that you may customize the URL that is passed upstream with variables and regexps. For example:

location ~ /remote(.+) {
    # with variables in proxy_pass, nginx needs a "resolver" or an upstream block
    proxy_pass http://porno.com$1?$args;
}
This isn't an nginx quirk; SSI simply doesn't accept remote URIs. You can only specify a local file path.
See
http://en.wikipedia.org/wiki/Server_Side_Includes
