How to run jupyter notebook behind haproxy and make its content public? - jupyter-notebook

How do I run a jupyter notebook behind haproxy? I tried to find an answer on the Internet, but there are only solutions for Nginx and Apache, and I think that using haproxy as a reverse proxy could be an even simpler solution, since it does not involve creating special virtual servers.

The following solution runs successfully and also does load balancing.
Assuming you have a site jupyter.example.com, the following code inserted into /etc/haproxy/haproxy.cfg will solve your problem of making the jupyter notebook public:
backend jupyter
    option forwardfor
    option http-server-close
    http-request set-header X-Client-IP %[src]
    # strip the /jupyter URL prefix before the request reaches the notebook server
    # (the prefix here must match the X-Script-Name header below)
    reqrep ^([^\ :]*)\ /jupyter/(.*) \1\ /\2
    # tell the notebook server the external prefix it is served under
    reqadd X-Script-Name:\ /jupyter
    # note: reqrep/reqadd were removed in HAProxy 2.x; use http-request there
    # two backend servers, load-balanced 2:1, with health checks
    server Server12 10.0.0.12:8888 weight 40 check
    server Server14 10.0.0.14:8888 weight 20 check
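For completeness, this backend needs a frontend that sends traffic for jupyter.example.com to it. The original answer does not show one; below is a minimal sketch in which the frontend name and bind address are assumptions:
frontend http-in
    bind *:80
    mode http
    # send requests whose Host header matches the notebook site to the backend above
    acl is_jupyter hdr(host) -i jupyter.example.com
    use_backend jupyter if is_jupyter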

Related

Accessing GraphDB Workbench through the internet (not localhost) on an Nginx server

I have GraphDB (standalone server version) on Ubuntu Server 16, running on localhost (with the command ./graphdb -d in /etc/graphdb/bin). But I only have ssh access to the server in the terminal, so I can't open the Workbench at localhost:7200 locally.
I have many websites running on this machine with Nginx. If I try to access the machine's main IP on port 7200 from the external web, it doesn't work (e.g. http://193.133.16.72:7200/ = "connection timed out").
I have tried to make a reverse proxy with Nginx with this code ("xxx" = domain):
server {
    listen 7200;
    listen [::]:7200;
    server_name sparql.xxx.com;
    location / {
        proxy_pass http://127.0.0.1:7200;
    }
}
But all this fails. I checked, and port 7200 is open in the firewall (ufw). In the logs I can see that GraphDB is working locally in some tests. But I need Workbench access to import and create repositories (I'm not sure how to do that, or whether it is possible without the Workbench GUI).
Is there a way to connect through the external web to the Workbench using a domain/IP and/or Nginx?
I read all the documentation and searched all day, but could not find a way to deal with this, sorry. I have only used GraphDB locally (the simple installer version), never the standalone server in production before.
PS: two related extra questions:
a) Is it the same procedure for creating a URI endpoint?
b) What is the recommended way and configuration to start the GraphDB daemon automatically at boot (with the command ./graphdb -d in the graphdb bin folder)? (I tried the line "/etc/graphdb/bin ./graphdb -d" in rc.local, but it didn't work.)
In case it is useful for someone, I was able to make it work with this Nginx configuration:
server {
    listen 80;
    server_name sparql.xxxxxx.com;
    location / {
        proxy_pass http://localhost:7200;
        proxy_set_header Host $host;
    }
}
I think it was the "proxy_set_header Host $host;" line that solved it (I tried the rest without it before and it didn't work). I think GraphDB uses some headers to set configurations, and they were not being passed.
I wonder if I am forgetting something else important to forward in the proxy, but at the moment the Workbench seems to work and opens on the domain used, "sparql.xxxxxx.com".
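If it helps anyone extending this, below is a sketch of other headers reverse proxies commonly forward; these reflect general practice, not something GraphDB is confirmed to require:
server {
    listen 80;
    server_name sparql.xxxxxx.com;
    location / {
        proxy_pass http://localhost:7200;
        proxy_set_header Host $host;
        # commonly forwarded headers, shown for completeness
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}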

Ubuntu + nginx - trying to install GeoIP module

I'm using vagrant (VVV actually) to run local wordpress installs. I want to test different behaviors for different geolocations on my local machine instead of uploading to the server every time, which is annoying.
So, I've tried to install the GeoIP nginx module on the local machine following this guide: https://piwik.org/faq/how-to/faq_166/ (plus a bit more googling, but that doesn't matter at the moment).
When I run ./configure, the following appears:
checking for GeoIP library ... found
checking for GeoIP IPv6 support ... found
I've also set the .dat files in my conf file, and set the $_SERVER (fastcgi_param) parameters, so they are displayed when I print the $_SERVER var.
But those GeoIP vars are empty. I'm not sure of the reason, but two things are bothering me. First, when I run nginx -V in the terminal, the argument --with-http_geoip_module is missing. Second, could it actually work if the REMOTE_ADDR (IP) is not my real IP (192.168.1.50, for example)?
nginx is a bit strange to me, so sorry if something isn't exact.
--
Operating system - macOS, nginx version - 1.3.15, running with VVV (vagrant box)
If there is a reverse proxy in front of your nginx, use geoip_proxy to set the IPs whose X-Forwarded-For header can be trusted.
You can also use that without actually having a reverse proxy while you're developing: add your local IP to the geoip_proxy list and set the X-Forwarded-For header to your public IP in your browser (use a plugin like Modify Headers).
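In nginx configuration terms, that might look like the sketch below; the .dat path and the trusted address are assumptions for a typical setup, and these directives only take effect if nginx was built with --with-http_geoip_module (which nginx -V should list):
http {
    # MaxMind database read by ngx_http_geoip_module (path is an assumption)
    geoip_country /usr/share/GeoIP/GeoIP.dat;
    # trust the X-Forwarded-For header sent from this address (e.g. your LAN IP)
    geoip_proxy 192.168.1.50;
    geoip_proxy_recursive on;
}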

502 gateway error with meteor, browser policy, HTTP connecting to S3

I am using meteor with the BrowserPolicy package and Meteor Up with the abernix/meteord:base docker image to deploy my app to an EC2 instance. I serve HTTPS using nginx, all on the same server. The trouble comes when I allow connections to an AWS S3 bucket using the following line:
BrowserPolicy.content.allowOriginForAll('*.s3-us-west-2.amazonaws.com');
It works locally, but when I deploy to the EC2 server, I get a 502 bad gateway error for the entire app.
I have read that this problem can sometimes be due to the header size being too large, and that it can be fixed by setting proxy_buffer_size 8k; in the /var/lib/docker/aufs/mnt/CHECKEDID/opt/nginx/conf/nginx.conf file. I checked, and my header size is 499 for a random svg that I have on S3.
If I do indeed need to change the docker image to allow this larger header size, how do I do that? I believe this is the source repo for the docker image. If I am totally off base and there is a different problem, please let me know that too.
Thanks!
I ended up figuring it out. It turned out to be a configuration error with nginx. I configured my EC2 instance using this guide. To fix nginx, I first logged into my cluster and opened this file:
sudo vi /etc/nginx/sites-available/default
I then added the proxy_buffer_size 8k; line to the server block of the configuration file. Finally, I checked the syntax with sudo nginx -t and restarted nginx (sudo service nginx restart). That was it!
The best part is that since I configured my nginx instance manually and deploy my meteor instance on top of it, running on port 3000, these settings persist even after I deploy new versions of my app.
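For anyone repeating this, the relevant part of the server block might look like the sketch below; the server name and certificate paths are placeholders, while port 3000 matches the meteor setup described above:
server {
    listen 443 ssl;
    server_name example.com;                  # hypothetical
    ssl_certificate     /etc/ssl/app.crt;     # hypothetical paths
    ssl_certificate_key /etc/ssl/app.key;
    location / {
        # allow larger upstream response headers (e.g. a long CSP from BrowserPolicy)
        proxy_buffer_size 8k;
        proxy_pass http://127.0.0.1:3000;     # meteor app on :3000, as above
    }
}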

Gitlab ports 80 & 8080 taken by a separate Gitlab instance?

I have Gitlab 8.6 running on an Ubuntu 14.04 server that seems to have gotten messed up. I consistently get a 502 error when accessing the site. The server had likely not been restarted since Gitlab was first installed, and a power outage caused it to reboot. Now I cannot start/restart Gitlab due to what appear to be port conflicts.
I installed Gitlab from source, I don't have any custom port configurations, and I am using NGINX. nginx -t shows that the configuration is syntactically correct.
When I run netstat -tupln, I see that Unicorn and a Gitlab instance are already running on :8080 and :80 respectively at boot. I suspect a second instance of Gitlab was installed, which runs at boot and causes port conflicts for the proper instance when I try to run it via service gitlab restart. I'm not even sure if that's possible, but I can't seem to figure out where to go from here. Every time I run sudo gitlab-ctl reconfigure or service gitlab start, it fails, and unicorn.stderr.log shows bind errors for port :8080. I tried moving the Unicorn service to :8081 as well, but I still receive the port binding error.
Does anyone know how I can detect whether there are multiple Gitlab instances running, and whether there is a way to remove a duplicate one if that is what happened? Thank you!
EDIT: Here is what is in the /etc/gitlab/gitlab.rb file. Everything else is commented out.
## Url on which GitLab will be reachable
external_url 'http://my-gitlab-instance.domain.com'
EDIT 2: My /home/git/gitlab/ directory is mapped to https://gitlab.com/gitlab-org/gitlab-ce.git, and is on the 8-7-stable branch. gitlab-shell and gitlab-workhorse are on the correct versions according to https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/8.6-to-8.7.md
EDIT 3: I have gotten to a point where Gitlab seems to self-check okay after removing the gitlab-ce package (https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135), but the server returns a 404. NGINX, Unicorn, Sidekiq, and gitlab-workhorse all say they're running. I see that unicorn.rb is listening on :8080, and nginx is listening on 0.0.0.0:80 and :::80. I guess now I'm troubleshooting this 404, and hopefully I will be back to my install-from-source.
What I found is that there were two issues causing the errors I had.
First, I removed a "gitlab-ce" package that was installed, following the instructions here: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135. For some reason, when I restart the machine I now have to restart these services, in this order, for Gitlab to run properly: redis-server, gitlab, nginx. However, Gitlab does start responding properly after that.
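On Ubuntu 14.04 that restart sequence would look something like this; the service names are assumed from a standard source install:
sudo service redis-server restart
sudo service gitlab restart
sudo service nginx restart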
Second, the 404 error was due to a different server that was also listening on that IP address, causing a conflict.
I will likely move to using the omnibus package on a fresh, new server going forward, but at least the immediate issues appear resolved. Thanks for your help, SLY!
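For anyone hitting similar conflicts, identifying which process owns each contested port is a useful first step; a sketch using standard tools, not part of the original answer:
# show the processes listening on the contested ports
sudo lsof -iTCP:80 -sTCP:LISTEN
sudo lsof -iTCP:8080 -sTCP:LISTEN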

Getting "connection refused" when trying to access etcd from within a Docker container

I am trying to access etcd from within a running Docker container. When I run
curl http://172.17.42.1:4001/v2/keys
I get
curl: (7) Failed to connect to 172.17.42.1 port 4001: Connection refused
I have four other hosts where this works fine, but every container on this machine has this problem. I'm really at a loss as to what's going on, and I don't know how to debug it.
My etcd environment variables are
ETCD_ADVERTISE_CLIENT_URLS=http://10.242.10.2:2379
ETCD_DISCOVERY=https://discovery.etcd.io/<token_removed>
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.242.10.2:2380
ETCD_LISTEN_CLIENT_URLS=http://10.242.10.2:2379,http://127.0.0.1:2379,http://0.0.0.0:4001
ETCD_LISTEN_PEER_URLS=http://10.242.10.2:2380
I can also access etcd from the host with
curl http://localhost:4001/v2/keys
So there seems to be some error when routing from the container out to the host. But I can't figure out what it is. Can anyone point me in the right direction?
I observed that I had to use the --advertise-client-urls and --listen-client-urls flags, like so:
./etcd --advertise-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001' --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
Then I was able to successfully do
curl -L http://hostname:2379/version
from any machine that could reach that server and it worked.
It turns out etcd was only listening on localhost:4001 on that machine, which is why I couldn't access it from within a container. This was despite my configuring one of the listen client URLs as http://0.0.0.0:4001.
It turns out that I had run sudo systemctl enable etcd2, which caused etcd to start before the cloud-config service ran. As a result, etcd started with its default configuration instead of the one I had specified in my cloud-config. Running sudo systemctl disable etcd2 fixed the issue.
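In a similar situation, two quick checks can confirm what etcd is actually bound to and which configuration the unit picked up; a diagnostic sketch, not from the original answer:
# list the addresses etcd is listening on
sudo netstat -tlnp | grep etcd
# show the environment the systemd unit actually resolved
systemctl show etcd2 --property=Environment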
