Access dokku app from same external port without configuring vhost - dokku

I have a dokku app configured without a VHOST, and the external port number changes on each new deployment. How can I get access to a consistent external port? I noticed the internal port is consistent, so I just need some direction on how to accomplish this.

Related

When I run my daemon the service is an http proxy instead of http

I am currently running a service with systemctl, and it is running as an http proxy, not normal http. Is this something that Google does? I am using port 8080 and I can't connect to it via http. My daemon is using port 8080 with the service type http-proxy (I am seeing this with the command nmap -sV -sC -p 8080 35.208.25.61 -vvvv -Pn). Instead, I want the daemon I'm running (wings.service) to use plain http, so it can use that type of connection to connect to my panel.
The panel is part of a piece of software, along with the daemon, called pterodactyl. Anyway, I have tried everything I could find, and I think this is the problem causing the dysfunction on my panel. I might just have to move to a different service to host my Discord bots.
Let me know if there's anything I can do to fix this.
As far as I can understand, you are unable to access the panel via its web URL.
The Pterodactyl web server can be installed using the NGINX or Apache web servers, and according to the Pterodactyl installation guide both listen on port 80 by default, so you must enable HTTP port 80 traffic on your Compute Engine VM instance.
The default firewall rules on GCP do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them by following these steps:
1. Go to the VM instances page.
2. Click the name of the desired instance.
3. Click the Edit button at the top of the page.
4. Scroll down to the Firewalls section.
5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
6. Click Save.
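If you prefer the command line, the same result can be achieved with gcloud. A sketch, assuming an instance named my-instance in zone us-central1-a (both placeholders) and the stock default-allow-http rule name:

# Tag the instance so it matches HTTP firewall rules
# (equivalent to ticking "Allow HTTP" in the console):
gcloud compute instances add-tags my-instance --tags=http-server --zone=us-central1-a

# Create the allow rule if it does not exist yet:
gcloud compute firewall-rules create default-allow-http --allow=tcp:80 --target-tags=http-server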
Note: the Pterodactyl panel and daemon installations are not the same on every operating system. If, after checking the VPC firewall rules in the VM settings and the status of the web server on the instance (NGINX or Apache), you still cannot access your panel, please provide a step-by-step list of all the commands you followed to complete the installation, including the OS version you used.

Multiple dokku apps, one domain

The behavior I want:
If the user goes to http://www.example.com/{anything-but-admin} one dokku app responds.
However if the user goes to http://www.example.com/admin a different dokku app responds.
Does dokku provide a simple way to do this? I believe I would have to disable the proxy port mapping and add a custom nginx implementation, but even if I do that, the docs specify:
If a proxy is disabled, Dokku will bind your container's port to a random port on the host for every deploy, e.g. 0.0.0.0:32771->5000/tcp.
If this is the correct thing to do, how do I force a static port number, so I can add that port number to my custom nginx configuration?
You can deploy two apps and have one of the apps reference the other's upstream.
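A minimal sketch of such a custom nginx config, assuming the main app has been pinned to 127.0.0.1:5000 and the admin app to 127.0.0.1:5001 (the port numbers, and the fact that they are pinned at all, are assumptions you would have to arrange yourself):

server {
    listen 80;
    server_name www.example.com;

    # /admin (and everything under it) goes to the admin app
    location /admin {
        proxy_pass http://127.0.0.1:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # everything else goes to the main app
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}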

How to point a Dokku app at the root domain of the dokku server

How do I point a dokku app, set up on the dokku server, at the root domain of the server itself? Suppose my domain is apps.com and the app to be deployed is called botapp. If I use virtualhost naming and do git remote add dokku dokku@apps.com:botapp, it will get pointed at botapp.apps.com. What do I do to get botapp pointed at apps.com itself (the root domain)?
Also, how do I find out which port a dokku app is running on, in spite of using subdomains (virtualhost naming)?
As of v0.3.10, Dokku ships with a domains plugin that lets you easily add domains to your app. By default your app is located at myapp.mydomain.com. If you want your app to be accessible via the root domain, just add the root domain as one of your app's domains: dokku domains:add myapp mydomain.com.
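For example, assuming an app named myapp (the domains:remove line is only needed if you also want to stop serving the default subdomain):

# Serve the app on the root domain as well:
dokku domains:add myapp mydomain.com

# Optionally drop the default subdomain:
dokku domains:remove myapp myapp.mydomain.com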
That was really straightforward; the docs need to be updated to reflect this.
For your second question: your app is not visible to the outside world. It is running inside its own docker container, with its own local IP address. If you still want to find out which port your app has exposed, you can run docker ps on your server.
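For instance (the --format flag just narrows the output to the relevant columns):

# List running containers with their port mappings; the PORTS column
# shows entries like 0.0.0.0:32771->5000/tcp for published ports.
docker ps --format 'table {{.Names}}\t{{.Ports}}'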

Can Nginx do Reverse Proxy updates without downtime

I have nginx receiving traffic for multiple domains on port 80, each with an upstream to a different application server on an application-specific port, e.g.:
abc.com:80 --> :3345
xyz.com:80 --> :3346
Is it possible to
1. add/delete domains (abc/xyz) without downtime
2. change application level port mapping (3345,3346) without downtime
If nginx can't do it, is there any other service that can, without restarting the service and incurring downtime?
Thanks in advance
In short: Yes.
Typically, you'd overwrite the existing config file(s) in place while nginx is running, test them using nginx -t, and once everything is fine, reload nginx using nginx -s reload. This will cause nginx to spawn new worker processes that use your new config, while the old worker processes are shut down gracefully. Graceful means closing the listen sockets while still serving currently active connections; every new request/connection will use the new config.
Note that in case nginx is not able to parse the new config file(s), the old config will stay in place.
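As shell commands, the workflow looks like this (sudo may or may not be needed in your setup):

# Validate the edited configuration; nothing is applied yet.
sudo nginx -t

# Graceful reload: new workers start with the new config,
# old workers finish their in-flight connections and then exit.
sudo nginx -s reload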

docker registry on localhost with nginx proxy_pass

I'm trying to set up a private docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000, and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I replace localhost with my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when it is nginx that proxies to localhost?
Server is on EC2 if it helps.
I'm not sure about the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect Docker's dataflows. The Docker registry is actually split into two parts, the index and the registry: the client contacts the index to handle metadata and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered-down index server. As a consequence, you might want to figure out which registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
I think the problem you face is that the docker-registry advertises so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and the Docker client then uses those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises the endpoints as your remote host, even though it listens on localhost:5000.
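Putting the pieces together on the nginx side, a minimal sketch (the server_name is a placeholder; the two header directives are the override shown above):

server {
    listen 80;
    server_name registry.example.com;  # placeholder for your public hostname

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;

        # Re-advertise the endpoints as the host the client actually used,
        # instead of the registry's own localhost:5000.
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}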
