I'm trying to set up a private Docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000, and I've set up nginx in front of it with a proxy_pass directive that forwards requests arriving on port 80 to localhost:5000.
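For reference, the relevant part of my nginx config is essentially this (a minimal reconstruction of the setup described above):

server {
    listen 80;

    location / {
        # pass everything arriving on port 80 to the registry
        proxy_pass http://localhost:5000;
    }
}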
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I replace localhost with my server's IP address in the nginx configuration file, I can push just fine. Why would my local docker push command complain about localhost when localhost is only referenced inside nginx?
Server is on EC2 if it helps.
I'm not sure of the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect Docker's data flows. The Docker registry is actually split into two parts: the index and the registry. The client contacts the index for metadata, and is then forwarded to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered-down index server. As a consequence, you might want to figure out which registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set the registry_endpoints config setting to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
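In context, those directives sit in the location that proxies to the registry; a minimal sketch, with the server name assumed:

server {
    listen 80;
    server_name registry.example.com;   # assumed name

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;

        # hide the endpoint the registry advertises and re-advertise
        # the host the client actually connected through
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}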
I think the problem you face is that the docker-registry advertises so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and the Docker client then uses those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with nginx on the (public) port 80, then switches to the advertised endpoints, which is probably localhost:5000 (that is, your local machine).
You should see whether the Docker registry you run has an option to advertise the endpoints as your remote host, even though it listens on localhost:5000.
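To check what is actually being advertised, you can probe the registry through nginx and inspect the response headers yourself; a quick sketch, assuming the old v1 registry API (the endpoints header may only appear on the /v1/repositories/... requests made during a push):

# look at the headers the registry sends back through nginx
curl -i http://your-server/v1/_ping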
Related
I set up a Cloudflare account and pointed my domain at its nameservers. During setup I left all the settings at their defaults.
I added two DNS records:
An "A" record targetting my IP address and a "CNAME" record creating an alias for it.
In my Nginx Proxy Manager (running in Docker on a bridged network connected with a database), there is only one proxy host directing the "CNAME" alias to a LAN IP (https://192.168.0.50:9443; Portainer operates on HTTPS).
Everything works flawlessly until I add an SSL certificate. The only option I tick is "Force SSL". When I try to access the site at that point, it loads for a bit and then times out with a "522" error. The only way I can get the site working again is to clear the Nginx volumes and restart the stack.
Turning the Cloudflare proxy off doesn't seem to make any difference, and neither does trying to access other Docker containers that operate over HTTP.
I managed to solve the problem: I hadn't forwarded port "443" on my router to the target device. Forcing SSL redirects every request to HTTPS on port 443, so with that port unreachable the connection simply times out with the "522" error. I hope this helps anyone else who made the same mistake.
I am currently running a service with systemctl, and it is being detected as an http-proxy service rather than plain http. Is this something that Google does? I am using port 8080 and I can't connect to it via HTTP. My daemon is using port 8080 with the service type http-proxy (I see this with the command nmap -sV -sC -p 8080 35.208.25.61 -vvvv -Pn). Instead, I want the daemon I'm running (wings.service) to use plain http, so my panel can connect to it.
The panel is part of the same piece of software as the daemon; it's called Pterodactyl. Anyway, I have tried everything I can think of, and I believe this is the problem causing my panel's dysfunction. I might just have to move to a different service to host my Discord bots.
Let me know if there's anything I can do to fix this.
As far as I can understand, you are unable to access the panel via its web URL.
The Pterodactyl web server can be installed using the NGINX or Apache web servers, and according to the Pterodactyl installation guide both listen on port 80 by default, so you must enable HTTP traffic on port 80 for your Compute Engine VM instance.
The default firewall rules on GCP do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that allows them by following these steps (a gcloud equivalent is sketched after the list):
1. Go to the VM instances page.
2. Click the name of the desired instance.
3. Click the Edit button at the top of the page.
4. Scroll down to the Firewalls section.
5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
6. Click Save.
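If you prefer the command line, the same can be done with gcloud; a sketch, with INSTANCE_NAME and ZONE as placeholders:

# tag the instance so the default HTTP/HTTPS rules apply to it
gcloud compute instances add-tags INSTANCE_NAME --zone=ZONE --tags=http-server,https-server

# or create an explicit rule allowing port 80 on the default network
gcloud compute firewall-rules create allow-http --network=default --allow=tcp:80 --target-tags=http-server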
Note: the Pterodactyl panel and daemon installations differ per operating system. If, after checking the VPC firewall rules in the VM settings and the status of the web server on the instance (NGINX or Apache), you still cannot access your panel, please provide a step-by-step list of all the commands you ran to complete the installation, including the OS version you used.
Currently I am using haproxy with manually updated backends that point to separate Docker nginx containers for different apps.
What is the best software for proxying requests to different local nginx containers based on hostname?
I would have a simple map file, or even /etc/hosts, which my script would update when the Docker containers change, for example:
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
So the ideal would be: haproxy -> some software proxy or DNS -> docker nginx,
where the software uses the map file on the fly, without reloading, and points requests at the local IP address.
Maybe I would put a Varnish cache in front, so it would need to be compatible with that too (and why wouldn't it be), making the flow:
request -> haproxy (for load balancing across multiple servers)
-> varnish on the public server IP (for in-memory caching based on host and route, so if there is a cache hit the response is returned immediately)
-> SOME PROXY OR DNS BASED ON A SIMPLE MAP FILE, which further proxies to the local IP of one of multiple docker nginx containers
-> docker nginx inside a custom network
-> some app in a container
What is the best practice for this flow, should I put Varnish somewhere else, and what is the software I am looking for?
I am currently using one extra nginx, mapping $host to a custom IP address in a custom maps.conf file and gracefully reloading nginx on change, but I have a feeling there is a better solution for this.
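For reference, what I have now looks roughly like this (maps.conf is my own file; addresses are examples):

map $host $backend {
    default 127.0.0.1;
    include /etc/nginx/maps.conf;   # lines like: domain1 1.1.1.1;
}

server {
    listen 80;

    location / {
        proxy_pass http://$backend;
    }
}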
Also, I forgot to mention that I don't need only HTTP proxying based on a map file, but TCP (ssh, smtp, ftp...) too; in those cases I will not have haproxy and varnish in front, and this app would face the public on those ports.
for example:
port:22
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
port:25
domain1 1.1.1.4
domain2 1.1.1.5
domain3 1.1.1.6
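For the HTTP part at least, haproxy itself can pick a backend from a map file per request, and that map can be updated at runtime without a reload; a sketch, with the backend names and file paths being illustrative:

frontend http-in
    bind *:80
    # look the Host header up in a map file; fall back to bk_default
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,bk_default)]

# /etc/haproxy/hosts.map contains lines like:
#   domain1 bk_domain1
#   domain2 bk_domain2

# update an entry at runtime through the admin socket, no reload needed:
# echo "set map /etc/haproxy/hosts.map domain1 bk_domain2" | socat stdio /var/run/haproxy.sock

Note that for plain TCP protocols that carry no hostname in the handshake (SSH, classic SMTP), there is nothing to map on; only TLS connections expose an SNI name that a proxy can inspect.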
I think something like Muguet might solve your issue.
From their GitHub repo:
When using Docker, it's sometimes a pain to access your containers using specific IPs/ports.
Muguet provides you with a DNS Server that resolves auto-generated hostnames to your containers IPs, plus a Reverse Proxy to access all your web apps on port 80.
I think what you want is dnsmasq. It is basically a lightweight DNS service you run on the host running your Docker containers, and it lets you use hostnames instead of IP addresses. It's a pretty common way to solve this issue.
A nice guide to setting up dnsmasq can be found at:
http://docs.blowb.org/setup-host/dnsmasq.html
and searching dnsmasq and docker will point you to many more resources.
One thing to remember: on your haproxy host, make sure you modify /etc/resolv.conf to include your dnsmasq server.
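A minimal sketch of that setup, with hostnames and addresses being purely illustrative:

# /etc/dnsmasq.conf on the host running the containers
address=/domain1/1.1.1.1
address=/domain2/1.1.1.2

# /etc/resolv.conf on the haproxy host should then list the
# dnsmasq server first, e.g.:
#   nameserver 192.168.0.10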
I'm currently trying to run two containers on a single host: an application (Ruby on Rails) and nginx as a reverse proxy and cache. The app is listening on TCP port 80. What I want is to be able to bring my application container down, remove it, and bring it up again without having to restart nginx. The problem is that nginx only seems to look up the container's IP once, so if the container comes back up at a different address, nginx just complains that there's nothing there.
I've tried a few things:
Using resolver 127.0.0.11 valid=5 to use Docker's DNS
Using an upstream block
Using a variable to try to get nginx to resolve at runtime.
I'm not sure where else to look; none of these options work if the application comes up on a different IP address. Is there something I'm missing that makes this impossible?
Thanks.
I ended up reading through the Twelve-Factor App, which inspired me to remove the nginx-to-Rails upstream proxying altogether and instead use nginx as a proxy cache whose upstream is the external DNS name.
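In case it helps anyone, a minimal sketch of what that ended up looking like (app.example.com stands in for the external DNS name, and the cache zone name is made up):

proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    listen 80;

    # re-resolve the name per request instead of pinning one IP;
    # 127.0.0.11 is Docker's embedded DNS if nginx runs in a container
    resolver 127.0.0.11 valid=5s;

    location / {
        set $origin http://app.example.com;
        proxy_cache appcache;
        proxy_pass  $origin;
        proxy_set_header Host app.example.com;
    }
}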
I've been trying to stream flv content from my openshift cartridge using nginx + rtmp module.
On my local machine, with the attached configuration, everything works just fine (I use ffplay for testing, e.g. ffplay rtmp://localhost:8080/test/streamkey)
When I try with the same configuration on openshift, I get the following error:
HandShake: Type mismatch: client sent 3, server answered 60 f=0/0
RTMP_Connect1, handshake failed.
However, if I enable port forwarding and test the stream server with ffplay rtmp://127.0.0.1:8080/test/streamkey, everything works fine. Here are my port forwardings:
rhc port-forward myappname
Checking available ports ... done
Forwarding ports ...
To connect to a service running on OpenShift, use the Local address
Service  Local              OpenShift
-------  --------------     -----------------
nginx    127.0.0.1:8080  => 127.10.103.1:8080
My cartridge is a "diy-0.1" cartridge. nginx 1.7.6 (also tested 1.4.4) + rtmp-module.
I suspect there are some issues with some proxy (apache?) that uses openshift for handling gears, maybe it does not allow rtmp headers(?)?
NB: Configuring nginx http-only works fine.
Can anybody help? I'm stuck, I think this is the first time I ask something on stackoverflow :-)
The nginx configuration (NB: the "play" path and the IP:PORT are taken from the OpenShift environment variables):
rtmp {
    server {
        listen 127.10.103.1:8080;
        chunk_size 8192;

        application test {
            play /var/lib/openshift/54da37644382ece45c000139/app-root/runtime/repo/public;
        }
    }
}
There is an Apache proxy in front of your application on OpenShift Online, and the content is likely being treated as HTTP traffic instead of RTMP traffic; that is why you are getting the handshake mismatch. When you use the port-forward, you gain direct access to your application and bypass the proxy, which is why it works that way. There is currently no way to bypass the Apache reverse proxy through the public IP; please see this developer portal article for more information about how requests are routed to your application: https://developers.openshift.com/en/managing-port-binding-routing.html