How to generate a fixed URL with SSL/TLS with ngrok

I am running an Apache server with a virtual host called test.net.ngrok.io.
What I would like is to make my virtual host publicly accessible with an SSL or TLS certificate.
I have the pro package, see: https://ngrok.com/pricing
So far I have managed to run this command, but without SSL:
./ngrok http -region=us -hostname=test.net.ngrok.io -host-header=rewrite test.net.ngrok.io:80
Is there a way to make a fixed public URL with an SSL/TLS certificate? If so, can you help me with the exact command to create the ngrok tunnel?
Thank you
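A sketch of what the command could look like, assuming the ngrok 2.x client (the -bind-tls flag and its values are from that client generation; test.net.ngrok.io is the hostname from the question):

```shell
# Tunnel local port 80 and expose the public endpoint over HTTPS only.
# -bind-tls=true  -> https:// endpoint only ("both" would also open http://)
./ngrok http -region=us -hostname=test.net.ngrok.io -bind-tls=true \
    -host-header=rewrite test.net.ngrok.io:80
```

Note that with an http tunnel, ngrok terminates TLS at its edge with its own *.ngrok.io certificate; serving your own certificate for a custom domain is what the separate `ngrok tls` tunnel type is for.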

Related

Connect to a remote Jupyter runtime over HTTPS with Google Colab

I'm trying to use Google's Colab feature to connect to a remote runtime that is configured with HTTPS. However, the UI only offers an option to specify the port, not the protocol.
I've checked the Network panel and the website starts a WebSocket connection with http://localhost:8888/http_over_websocket?min_version=0.0.1a3, HTTP-style.
Full details of my setup:
I have a public Jupyter server at https://123.123.123.123:8888 with self-signed certificate and password authentication
I've followed jupyter_http_over_ws' setup on the remote
I started the remote process with jupyter notebook --no-browser --keyfile key.pem --certfile crt.pem --ip 0.0.0.0 --notebook-dir notebook --NotebookApp.allow_origin='https://colab.research.google.com'
I've created a local port forwarding with ssh -L 8888:localhost:8888 dev@123.123.123.123
I've turned on network.websocket.allowInsecureFromHTTPS on Firefox
I went to https://localhost:8888 and logged in
Naturally, when the UI calls http://localhost:8888/http_over_websocket?min_version=0.0.1a3 it fails. If I manually access https://localhost:8888/http_over_websocket?min_version=0.0.1a3 (note the extra s) it gets through.
I see three options to solve it:
Tell the UI to use secure WS connection
Run a proxy on my local machine to transform the HTTPS into plain HTTP
Turn off HTTPS on my remote
I think the last two will work, but I'd rather not go that way.
How to do #1?
Thanks a lot!
Your option 1 isn't possible in Colab today.
Why do you want to use HTTPS over an SSH tunnel that already encrypts forwarded traffic?
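If option 2 (a local proxy that unwraps the TLS) turns out to be acceptable after all, a minimal sketch with socat (assuming socat is installed and local port 8889 is free; verify=0 is needed because the remote certificate is self-signed):

```shell
# Listen on plain HTTP locally and forward to the HTTPS Jupyter endpoint
# that the SSH tunnel exposes on localhost:8888.
socat TCP-LISTEN:8889,fork,reuseaddr OPENSSL:localhost:8888,verify=0
```

Colab would then be pointed at port 8889, where the connection is plain HTTP as its UI expects.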

Cannot trickle ICE server using external IP, Coturn server in Ubuntu

I have set up a Coturn server on Ubuntu following https://www.webrtc-experiment.com/docs/TURN-server-installation-guide.html#coturn.
The TURN server works fine using the local IP, but when I try to trickle using the external IP I get the error "Not reachable?".
If I access the TURN server's URL from a browser using the external IP, it responds with this message:
TURN Server
https admin connection
To use the HTTPS admin connection, you have to set the database table _admin_user_ with the admin user accounts.
My turnserver.conf looks like:
user=test:test123
listening-port=3478
tls-listening-port=5349
listening-ip=192.168.22.101
relay-ip=192.168.22.101
external-ip=202.137.12.10
realm=yourdomain.com
server-name=yourdomain.com
lt-cred-mech
userdb=/etc/turnuserdb.conf
cert=/etc/ssl/my-certificate.pem
pkey=/etc/ssl/my-private.key
no-stdout-log
I am starting turn server using command:
sudo turnserver -a
And I try to trickle using below format:
turn:202.137.12.10:3478[test:test123]
Trickle: https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
Please tell me where I am going wrong.
I found what was wrong: it turned out that UDP port 3478 was blocked. I was also able to trickle if I used the TCP protocol (turn:?transport=tcp[username:password]).
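For anyone hitting the same symptom, two hedged checks (this assumes ufw as the local firewall; turnutils_uclient ships with coturn, and the IP and credentials are the ones from the question):

```shell
# Open the TURN listener port if a local firewall is in the way
sudo ufw allow 3478/udp
sudo ufw allow 3478/tcp

# Exercise the TURN server from another machine with coturn's own test client
turnutils_uclient -u test -w test123 202.137.12.10
```

If the cloud provider also filters traffic (security groups, etc.), UDP 3478 has to be opened there as well.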

Accessing Nexus from Eclipse/M2E through httpd with LDAP requirement

Recently, I configured a Nexus repo on a corporate server at https://mycorporateserver.corporation.com/nexus/.
The way "it's always been done" is to put our "apps" on the server and use Apache httpd to serve the pages and manage access using LDAP.
Nexus is configured for anonymous access, HTTPS, localhost only (all works fine). Then we used Apache httpd to serve that Nexus page/URI to others using ProxyPass and ProxyPassReverse (per the instructions in Sonatype's documentation).
The catch is that the httpd configuration requires ldap. So, if I hit the given Nexus URI from a web browser, the browser asks for my corporate login. I log in with my user name and password and can view the repository as an anonymous user just fine.
I did not configure Nexus for ldap, Nexus provided me read-only anonymous access combined with the ability to log in as an admin from the login menu.
Great. The problem (not surprising) is when Eclipse/M2E tries to contact the Nexus repository I get:
"could not transfer artifact 'myartifact' from/to nexus (https://mycorporateserver.corporation.com/nexus/): handshake alert."
In my settings.xml, I included
<servers>
  <server>
    <id>tried many different versions of the server name including full URI</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
but that doesn't seem to work, which I think makes sense since I'm not trying to log in to Nexus but rather to supply my credentials to LDAP(?)
In M2E/Eclipse, is there a way to provide the needed LDAP information?
Is it better to not let httpd manage access but configure Nexus to handle everything LDAP? Is there a better/different way to configure Nexus/httpd/LDAP/Eclipse to solve the problem?
Thanks for all pointers and guidance!
"could not transfer artifact 'myartifact' from/to nexus
(https://mycorporateserver.corporation.com/nexus/): handshake alert."
That's an SSL handshake problem: the Java running Eclipse does not consider the certificate installed on Nexus to be valid. This is almost certainly because either:
The certificate is self-signed.
The certificate has been signed by a private certificate authority which is not in the truststore of the Java running Eclipse.
Either way, the workaround is to import the certificate served by Nexus into the trust store of the Java running Eclipse.
See here for more information:
https://support.sonatype.com/hc/en-us/articles/213464948-How-to-trust-the-SSL-certificate-issued-by-the-HTTP-proxy-server-in-Nexus
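As a sketch, the certificate can be fetched with openssl and imported with keytool (the cacerts path and the default changeit password are assumptions; point -keystore at the cacerts file of the JRE that actually launches Eclipse):

```shell
# Grab the certificate the server presents
openssl s_client -connect mycorporateserver.corporation.com:443 \
    -servername mycorporateserver.corporation.com </dev/null \
    | openssl x509 -outform PEM > nexus.crt

# Import it into the truststore of the Java running Eclipse
keytool -importcert -alias nexus -file nexus.crt \
    -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit
```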
Ultimately, as I understand it, it was a mismatch between how the VirtualHost and ServerName were defined in the apache httpd configuration.
https://mycorporateserver.corporation.com/nexus/ was the ServerName but the VirtualHost was defined with the ip and port https://mycorporateserver.corporation.com:port.
Original
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com/nexus/
...ldap and proxy pass configs
</VirtualHost>
Since we have more than one virtual host containing this ip and port combination, the server looked further into the configuration to find the proper page by reading the ServerName. Since no ServerNames matched what the clients sent, the handshake error occurred.
https://httpd.apache.org/docs/current/vhosts/name-based.html
Changing ServerName in the httpd conf to include the port solved the handshake error.
Final
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com:port/nexus/
...ldap and proxy pass configs
</VirtualHost>
(I'm by no means an apache httpd expert, still want to find out if there is a way to do all this without showing the port in the URL)
Then, when sending a request from Eclipse/M2E to the server, the response was "Unauthorized"
Adding the nexus server plus username and password to settings.xml solved the authorization problem and all worked great!
<servers>
  <server>
    <id>nexus</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
To ensure passwords were not stored in plain text, instructions at this Maven site were used to create encrypted passwords: https://maven.apache.org/guides/mini/guide-encryption.html
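For reference, the two commands from that guide (the master password goes into ~/.m2/settings-security.xml, and the per-server encrypted password goes into the <password> element of settings.xml):

```shell
# 1. Create a master password; store the output in ~/.m2/settings-security.xml
#    inside <settingsSecurity><master>...</master></settingsSecurity>
mvn --encrypt-master-password

# 2. Encrypt the actual server password for use in settings.xml
mvn --encrypt-password
```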
In hindsight, the question probably could have been asked better/differently but I didn't yet know what I learned today.

GitLab not working w/ Nginx

I am trying to get GitLab set up with my current installation of Nginx, but I keep getting an Error 502. I have included my configuration files and am not sure what I am doing wrong. I followed the "Using a non-bundled web-server" steps at https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md
/etc/nginx/conf.d/gitlab-omnibus-nginx.conf
http://pastebin.com/bQ8eCiNh
/etc/gitlab/gitlab.rb
http://pastebin.com/Lw5tjwXy
HTTP 502 means "The server was acting as a gateway or proxy and received an invalid response from the upstream server." So there are two possibilities here.
Your Gitlab server is not actually working or is returning an invalid response. After starting the Gitlab server, use sudo netstat -plnt and make sure it is running on a port and note the port. Then connect directly to this port in your browser (or from the CLI on the server if necessary) and confirm that Gitlab is working fine without a proxy in front of it. If Gitlab is running on a socket and not a port, there are also tools to test HTTP servers through socket connections that you can use.
Nginx is not configured correctly to connect to Gitlab. In this case, check your Nginx error log to see if there is any more detail besides the "502" error.
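A quick way to run check 1, assuming the omnibus GitLab application server listens on 127.0.0.1:8080 (the actual port or socket depends on your gitlab.rb, so 8080 here is an assumption):

```shell
# Find what port (or socket) GitLab is actually listening on
sudo netstat -plnt

# Probe it directly, bypassing Nginx
curl -I http://127.0.0.1:8080/
```

If the direct probe returns a sensible HTTP response while Nginx still returns 502, the problem is on the Nginx side (wrong upstream address, or permissions on a Unix socket).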

docker registry on localhost with nginx proxy_pass

I'm trying to setup a private docker registry to upload my stuff but I'm stuck. The docker-registry instance is running on port 5000 and I've setup nginx in front of it with a proxy pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I change localhost to my server's IP address in the Nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when localhost is only referenced in the Nginx configuration?
Server is on EC2 if it helps.
I'm not sure the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the dataflows for Docker. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and then is forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered down index server. As a consequence, you might want to figure out what registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set up the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override X-Docker-Endpoints header set by registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
I think the problem you face is that the docker-registry is advertising so-called endpoints through a X-Docker-Endpoints header early during the dialog between itself and the Docker client, and that the Docker client will then use those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with Nginx on the (public) 80 port, then switch to the advertised endpoints, which is probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises endpoints as your remote host, even if it listens on localhost:5000.
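Putting the two answers together, a sketch of the Nginx server block (server_name is an assumption, and the X-Docker-Endpoints override is the one quoted above):

```nginx
server {
    listen 80;
    server_name registry.example.com;  # assumption: your public hostname

    client_max_body_size 0;  # image layers can be large

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;

        # Stop the registry from advertising localhost:5000 back to clients
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}
```

With this in place the client keeps talking to the public hostname for the endpoint requests instead of falling back to localhost:5000.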
