How to create an insecure Jupyter server - jupyter-notebook

Jupyter only allows access from localhost unless I do a bunch of extra security stuff. I am running my server so that it is only accessible on a local network where anyone with access is equal in trustworthiness to localhost. How do I set up a jupyter notebook server with no extra security features?

Based on your question, I expect you want this configuration (in ~/.jupyter/jupyter_notebook_config.py):
c.NotebookApp.ip = '0.0.0.0' # listen on all IPs
c.NotebookApp.token = '' # disable authentication
There are a few security features in Jupyter (as of 4.3.1). I'll go over how to disable each one, and whether/when it makes sense to disable it:
By default, it listens only on localhost. This can be changed so that it listens on all public IP addresses:
c.NotebookApp.ip = '0.0.0.0'
Listening on public IPs should generally come with enabling HTTPS and/or password or token authentication (docs). If it's all internal on a trusted network where nothing bad ever happens, you can proceed to disable the other security features described below.
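If you do want HTTPS on a public interface, the notebook server can terminate TLS itself; a minimal sketch of the two extra settings (the paths are placeholders for your own certificate and key):
c.NotebookApp.certfile = '/path/to/fullchain.pem' # TLS certificate (chain)
c.NotebookApp.keyfile = '/path/to/privkey.pem' # matching private key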
Token authentication is enabled by default. To disable it:
c.NotebookApp.token = ''
Disabling authentication means that anyone with access to the host can run arbitrary code. It seems like this is what you want. Alternatively, you can set a password:
In [1]: from notebook.auth import passwd
In [2]: passwd()
Enter password:
Verify password:
Out[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'
You can store this in c.NotebookApp.password.
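If you'd rather generate the hash non-interactively (say, from a provisioning script), passwd() also accepts the passphrase as an argument; a minimal sketch (the passphrase is a placeholder):
from notebook.auth import passwd # same helper as in the interactive session above
hashed = passwd('correct horse battery staple') # placeholder passphrase
print(hashed) # paste the output into c.NotebookApp.password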
You can also store this password in ~/.jupyter/jupyter_notebook_config.json:
{
  "NotebookApp": {
    "password": "sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed"
  }
}
Jupyter also has CORS protections to prevent other websites from accessing the server. This means that when a user on your network visits example.com, JavaScript on that page cannot execute code on your notebook server. It sounds like you don't want to touch this, but if you are running a service that should be able to access the notebook server, you can add its origin:
c.NotebookApp.allow_origin = 'https://your.other.host'
Finally, Jupyter 4.3.1 introduces an XSRF token, which addresses the same category of cross-site attack described above. You don't need to touch this if users only access the server directly, rather than through JavaScript on other websites.
c.NotebookApp.disable_check_xsrf = True
Putting it all together, a completely insecure notebook server, which is to say one where any website can run code on it as long as a browser can connect to its host (this includes localhost, or the LAN if the browser is running inside the LAN), looks like this:
c.NotebookApp.ip = '0.0.0.0' # listen on all IPs
c.NotebookApp.token = '' # disable authentication
c.NotebookApp.allow_origin = '*' # allow access from anywhere
c.NotebookApp.disable_check_xsrf = True # allow cross-site requests
This might be desirable if you are aiming to make compute resources free for the world to use however they want via the notebook API.
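As a quick sanity check that authentication is really off, you can hit the notebook REST API without a token; a minimal sketch using requests (host and port are assumptions, adjust to your setup):
import requests

# With token auth disabled this should return 200 and a JSON list of
# running sessions, with no Authorization header or token query argument;
# with auth enabled the request would be rejected instead.
r = requests.get("http://my-jupyter-host:8888/api/sessions")
print(r.status_code, r.json())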

Related

Keycloak starts with a new realm and some client configurations

I am trying to use Keycloak as the authentication service in my design. In my case, when Keycloak starts, I need one more realm besides the default master realm. Assume the new realm is called "demo".
So when Keycloak starts, it should have two realms (master and demo).
In addition, in the demo realm, I need to configure the default client "admin-cli" to enable "Full Scope Allowed", and I also need to add some built-in mappers to this client.
I wonder whether I can use something like an initialization file that Keycloak can load when starting?
Or do I need to use the Keycloak client APIs to do these operations (e.g., the Java Keycloak admin client)?
Thanks in advance.
You can try the following:
Create the Realm;
Set all the options that you want;
Go to Manage > Export;
Switch Export groups and roles to ON;
Switch Export clients to ON;
Export.
That will export a .json file with the configurations.
Then you can test it by deleting your demo realm and:
Go to Add Realm;
Choose the .json file that was exported;
Click Create.
Check whether the configurations you changed are still present in the demo realm. If they are, it means you can use this file to import the realm from; otherwise, for the options that were not persisted, you will have to create them via the Admin REST API.
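If you do end up scripting it against the Admin REST API, the flow is: obtain an admin access token from the master realm, then POST the exported realm JSON to the /admin/realms endpoint. A minimal sketch in Python (the base URL, admin credentials, and file name are assumptions for illustration; older Keycloak distributions serve everything under an /auth prefix):
import json
import requests

BASE = "http://localhost:8080/auth"  # hypothetical base URL; drop /auth on newer Keycloak

# 1. Obtain an admin access token from the master realm.
token = requests.post(
    f"{BASE}/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "username": "admin",   # hypothetical admin credentials
        "password": "admin",
    },
).json()["access_token"]

# 2. Create the demo realm from the previously exported definition.
with open("demo-realm.json") as f:
    realm = json.load(f)

resp = requests.post(
    f"{BASE}/admin/realms",
    headers={"Authorization": f"Bearer {token}"},
    json=realm,
)
resp.raise_for_status()  # expect 201 Created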

How to set up https when developing locally with webpack and hosting on Azure in a Docker container running ASP.NET Core

I am hosting on Azure and have it configured to only allow https. The backend is running ASP .NET Core in a Linux container. The webserver (Kestrel) is running without https enabled. I've configured Azure TLS/SSL settings to force https, so when users connect from the public internet, they have to use https. I have a cert that is signed by a cert authority and it's configured in the Azure App Service -> TLS/SSL -> Bindings settings.
However, in my local development environment I've been running webpack over http. So when I test, I connect to localhost:8080 and this is redirected to localhost:8085 by webpack; localhost:8085 is the port Kestrel is listening on. I've decided I want to develop locally using https so that my environment mimics the production environment closely. To do this, I've started the webpack-dev-server with the --https command line option and amended my redirects in my webpack.config.js.
For example:
'/api/*': {
    target: 'https://localhost:' + (process.env.SERVER_PROXY_PORT || "8085"),
    changeOrigin: true,
    secure: false
},
This redirects https requests to port 8085.
I've created a self-signed cert for use by Kestrel when developing locally. I modified my code to use this certificate as shown below:
let configure_host (settings_file : string) (builder : IWebHostBuilder) =
    // Turns out that if you pass an anonymous function to a function that
    // expects an Action<...> or Func<...>, type inference works out the
    // inner types, so you don't need to specify them.
    builder
        .ConfigureAppConfiguration(fun ctx config_builder ->
            config_builder
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddEnvironmentVariables()
                .AddJsonFile(settings_file, false, true)
                .Build() |> ignore)
        .ConfigureKestrel(fun ctx opt ->
            eprintfn "JWTIssuer = %A" ctx.Configuration.["JWTIssuer"]
            eprintfn "CertificateFilename = %A" ctx.Configuration.["CertificateFilename"]
            let certificate_file = ctx.Configuration.["CertificateFilename"]
            let certificate_password = ctx.Configuration.["CertificatePassword"]
            let certificate = new X509Certificate2(certificate_file, certificate_password)
            opt.AddServerHeader <- false
            // Bind Kestrel to localhost:8085 using the local dev certificate.
            opt.Listen(IPAddress.Loopback, 8085, fun opt -> opt.UseHttps(certificate) |> ignore))
        .UseUrls("https://localhost:8085") |> ignore
    builder
This all works, and I can connect to webpack locally and it redirects the request to the webserver using https. The browser complains that the cert is insecure because it's self-signed but that was expected.
My question is how this should be set up in the production environment. I don't want to be running the container on Azure with the certificates I created locally embedded in the image. In my production environment, should I be configuring Kestrel, as I have done with the localhost code, to use the cert loaded into Azure (as mentioned in the first paragraph)? Or is simply binding it to the domain using the portal and forcing https via the web UI enough?
On Azure, if you have a PFX certificate, you can choose to upload it.
However, this certificate needs to come from a trusted certificate authority.
If the URL is a subdomain, you can choose a Free App Service Managed Certificate.
After that, all you need to do is enable HTTPS Only in the portal.
If it's a naked domain and you really need the certificate to be free, you can get a certificate from sslforfree.com. sslforfree will give you the .cer file and the private key, from which you will need to generate a PFX.
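For that last step, one way to build the PFX is with Python's cryptography package; a minimal sketch, assuming the .cer and key files from sslforfree are PEM-encoded (file names and the export password are placeholders):
from cryptography import x509
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization.pkcs12 import serialize_key_and_certificates

# Load the PEM private key and certificate downloaded from sslforfree.
key = serialization.load_pem_private_key(open("private.key", "rb").read(), password=None)
cert = x509.load_pem_x509_certificate(open("certificate.cer", "rb").read())

# Bundle them into a password-protected PKCS#12 (.pfx) file for Azure.
pfx = serialize_key_and_certificates(
    name=b"mycert",
    key=key,
    cert=cert,
    cas=None,
    encryption_algorithm=serialization.BestAvailableEncryption(b"pfx-password"),
)
open("certificate.pfx", "wb").write(pfx)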

Telegraf - how to monitor multiple Tomcat instances?

I managed to gather data from single Tomcat instance to Telegraf as follows.
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:19090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
## Request timeout
# timeout = "5s"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
Now, I want to monitor multiple Tomcat instances, but there does not seem to be an example of how to monitor multiple. Does anybody know?
The answer turned out to be very simple. Just declare the inputs.tomcat block multiple times as follows.
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:19090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
[[inputs.tomcat]]
## URL of the Tomcat server status
url = "http://127.0.0.1:29090/manager/status/all?XML=true"
## HTTP Basic Auth Credentials
username = "admin"
password = "fD*(*DSS"
As far as I recall, there are a couple of ways.
1) The easiest way is to use separate configuration files: create tomcat1.conf, place it under the /etc/telegraf/telegraf.d/ folder, and use the same plugin you mentioned above (inputs.tomcat); similarly, create tomcat2.conf, etc. for all your Tomcat instances. That way you can monitor multiple Tomcat instances. See if that helps! The con of this approach is that you have to create N tomcatXX.conf files under the telegraf.d folder (which can be easily fixed if you create these files on the fly while provisioning a machine using Ansible or similar tools, templating the file and iterating over the list of Tomcat instances).
2) The other way, which may help as well, uses just one configuration file.
In that one configuration file, use the following plugins together to capture what you are looking for. PS: if you use the inputs.exec plugin, then the output of your custom script (the one called by inputs.exec) must be in a known format (InfluxDB line protocol) that Telegraf and InfluxDB can understand and store, or you'll see some minor errors, for which you can see a few of my posts. A sketch of such a script appears at the end of this answer.
exec plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
http_* plugins (especially http_response): https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response
filestat plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat
logparser plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser
procstat plugin: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat
Look at the plugin links above for what they do and how to set them up in Telegraf; that should get you most of what you are looking for if you don't want multiple conf files, one per Tomcat instance.
https://github.com/influxdata/telegraf/tree/master/plugins/inputs contains all input plugins (see if there are some that you may be interested in).
Also see if you can use the prefix property efficiently to distinguish between the various metrics/events coming from these plugin(s).
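For option 2, here is a minimal sketch of the kind of script the inputs.exec plugin could call: it polls each Tomcat status endpoint and prints InfluxDB line protocol to stdout (the instance list, credentials, and measurement name are assumptions for illustration; the real status XML contains many more fields):
import time
import requests
import xml.etree.ElementTree as ET

# Hypothetical list of Tomcat instances to poll.
INSTANCES = [
    ("tomcat1", "http://127.0.0.1:19090/manager/status/all?XML=true"),
    ("tomcat2", "http://127.0.0.1:29090/manager/status/all?XML=true"),
]

for name, url in INSTANCES:
    xml = requests.get(url, auth=("admin", "fD*(*DSS"), timeout=5).text
    root = ET.fromstring(xml)
    memory = root.find("jvm/memory")  # <memory free='..' total='..' max='..'/>
    ts = time.time_ns()  # line protocol timestamps are nanoseconds
    # Format: measurement,tag=value field=value,... timestamp
    print(f"tomcat_jvm,instance={name} "
          f"free={memory.get('free')}i,total={memory.get('total')}i,max={memory.get('max')}i "
          f"{ts}")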

Symfony2 Using Amazon Load Balancers and SSL: Error on isSecure() Check

Hi, I'm running into an issue where Symfony2 doesn't recognize the load balancer headers from Amazon AWS, which are needed to determine whether a request is SSL or not using the requires_channel: https security configuration.
By default Symfony2 $request->isSecure() looks for "X_FORWARDED_PROTO" but there's apparently no standard for this, and Amazon AWS load balancers use "HTTP_X_FORWARDED_PROTO".
I see the cookbook article for setting trusted proxies in config, but that's geared toward whitelisting specific IP addresses and won't work with AWS, which generates dynamic IPs. The other option, setting trust_proxy_headers: true in the framework config, is deprecated. This breaks my app by forcing endless redirects on the pages that require SSL only.
You can now change the headers using setTrustedHeaderName(). This method allows you to change the four headers used throughout the file.
const HEADER_CLIENT_IP = 'client_ip'; // defaults 'X_FORWARDED_FOR'
const HEADER_CLIENT_HOST = 'client_host'; // defaults 'X_FORWARDED_HOST'
const HEADER_CLIENT_PROTO = 'client_proto'; // defaults 'X_FORWARDED_PROTO'
const HEADER_CLIENT_PORT = 'client_port'; // defaults 'X_FORWARDED_PORT'
The above, taken from the Request class indicate the keys available for use with the aforementioned method.
// $request is instance of HttpFoundation\Request;
$request->setTrustedHeaderName('client_proto', 'HTTP_X_FORWARDED_PROTO');
That said, at the time of writing, using "symfony/http-foundation": "2.5.*" the below code correctly determines whether or not the request is secure whilst behind an AWS Load Balancer.
// All IPs (*)
// $proxies = [$request->getClientIp()];
// Array of CIDR pools from load balancer
// EC2 -> Network & Security -> Load Balancers
// -> X -> Instances (tab) -> Availability Zones
// -> Subnet (column)
$proxies = ['172.x.x.0/20'];
$request->setTrustedProxies($proxies);
var_dump($request->isSecure()); // bool(true)
You're right: the X_FORWARDED_PROTO header is hardcoded into HttpFoundation\Request, while, as far as I know, overriding the request class in Symfony is currently not possible.
There has been a discussion/RFC about this topic here and there is an open pull-request that solves this issue using a RequestFactory.

Deny access if the client is using a different SSL certificate

I have scoured this forum as best I could but found no plausible answer; Google was no help either.
I have a FLEX 3 application using AMFPHP over HTTPS (Flex RemoteObject). I would like to prevent the client from making any HTTPS requests if the browser client SSL cert is not the one provided by my server, thus making it more difficult for Charles, Burp, etc. to read the data going to the server by proxying the connection.
When someone uses one of these proxy servers there is a certificate error, since, e.g., Charles provides its own cert to the browser and makes the HTTPS connection to the server as a normal client, so on the server end there is no difference.
Is there any way to only allow connections if my cert is the one being used at the client?
Using SecureSocket (I needed to upgrade to SDK 4.6) I was able to check the validity of the SSL certificate the browser was using.
The default behavior is that any incorrect cert (analogous to the browser cert warning) will cause SecureSocket to fail. This makes creating a check very easy using the sample code in the Adobe documentation:
private var secureSocket:SecureSocket = new SecureSocket();

public function SecureSocketExample()
{
    // If the server's certificate fails validation, IOErrorEvent.IO_ERROR
    // fires instead of Event.CONNECT.
    secureSocket.addEventListener( Event.CONNECT, onConnect );
    secureSocket.addEventListener( IOErrorEvent.IO_ERROR, onError );
    try
    {
        secureSocket.connect( "ip address here", 443 );
    }
    catch ( error:Error )
    {
        trace( error.toString() );
    }
}
Adding this to the creationComplete listener is enough to make sure the user hasn't accepted an insecure cert or a man-in-the-middle. The rest of the application's communication can occur over the "normal" SSL AMF channel.
One generic approach could be to use Strict Transport Security (HSTS), which instructs browsers to connect to your host only over HTTPS and, notably, makes certificate errors non-bypassable.
