I've been trying to stream FLV content from my OpenShift cartridge using nginx + the rtmp module.
On my local machine, with the configuration below, everything works just fine (I use ffplay for testing, e.g. ffplay rtmp://localhost:8080/test/streamkey).
When I try the same configuration on OpenShift, I get the following error:
HandShake: Type mismatch: client sent 3, server answered 60 f=0/0
RTMP_Connect1, handshake failed.
However, if I enable port-forwarding and test the stream server using ffplay rtmp://127.0.0.1:8080/test/streamkey, everything works fine. Here are my port forwardings:
rhc port-forward myappname
Checking available ports ... done
Forwarding ports ...
To connect to a service running on OpenShift, use the Local address
Service Local          OpenShift
------- -------------- ---- -----------------
nginx   127.0.0.1:8080  =>  127.10.103.1:8080
My cartridge is a "diy-0.1" cartridge running nginx 1.7.6 (I also tested 1.4.4) with the rtmp module.
I suspect there is an issue with some proxy (Apache?) that OpenShift uses for handling gears; maybe it does not allow RTMP traffic?
NB: configuring nginx as an HTTP-only server works fine.
Can anybody help? I'm stuck, and I think this is the first time I've asked something on Stack Overflow :-)
The nginx configuration (NB: the "play" path and the IP:PORT are taken from the OpenShift environment variables):
rtmp {
    server {
        listen 127.10.103.1:8080;
        chunk_size 8192;

        application test {
            play /var/lib/openshift/54da37644382ece45c000139/app-root/runtime/repo/public;
        }
    }
}
There is an Apache proxy in front of your application on OpenShift Online. It is possible that the proxy tries to handle the content as HTTP traffic instead of RTMP traffic, which is why you are getting the handshake mismatch. When you go through the port-forward, you gain direct access to your application and bypass the proxy, which is why it works fine that way. There is currently no way to bypass the Apache reverse proxy through the public IP; please see this developer portal article for more information about how requests are routed to your application: https://developers.openshift.com/en/managing-port-binding-routing.html
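For testing, the port-forward route from your question is the workaround, since it bypasses that proxy entirely (commands taken from the question):

rhc port-forward myappname
ffplay rtmp://127.0.0.1:8080/test/streamkey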
Related
I recently moved over from coder/code-server to Microsoft's implementation of a code server. Included in this is the ability to set up tunnels to the remote host; however, I can't seem to get them to work. I'm using nginx as a reverse proxy to forward requests to the code-server service, which works just fine for the most part. Since the documentation doesn't include a list of networking requirements for hosting a code server, I'd like to know if there are any ports that should be open and forwarded, or whether any additional configuration is needed.
I've configured nginx to forward all requests to the code-server instance, and I can connect to it through the domain I'm listening for, but this isn't enough to get tunnels working.
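For reference, the proxy part of my config looks roughly like this (a minimal sketch; the domain and upstream port are placeholders):

server {
    listen 80;
    server_name code.example.com;          # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8000;  # assumed code-server port
        proxy_set_header Host $host;
        # WebSocket upgrade headers, which the code-server web UI needs
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Accept-Encoding gzip;
    }
}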
I have a XAMPP server running on macOS 10.14.6.
I activated port forwarding on it like this:
localhost:8080 -> 80 (Over SSH)
localhost:8443 -> 443 (Over SSH)
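(These are plain SSH local forwards, along the lines of the following sketch; user and host are placeholders:)

ssh -L 8080:localhost:80 -L 8443:localhost:443 user@remote-host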
I can navigate to my website on localhost:8443 with no issue.
I have another server running Symfony 5.4, available at localhost:8000. I can navigate to this website with no issue too.
But from the 'XAMPP-hosted' website, I'm not able to make a request to the 'Symfony-hosted' server: I get a "Connection refused" error.
I tried requesting both 127.0.0.1:8000 and localhost:8000: same behavior.
I also tried setting the Symfony server to listen on a different port: same behavior.
It only works when I make the request to the real local IP address (like 192.168.1.34:8000). The problem is that this IP is not stable, so I'd like to make localhost work properly.
What could be the cause of this error?
What can I do to use the localhost hostname for my requests?
Check whether another application is already running on the same port: when something else is bound to it, a connection attempt to that server can get a similar rejection. Find the conflicting process (e.g. in the task manager / Activity Monitor) and stop it. I think it's more or less like that.
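A sketch of how to check that on macOS (port 8000 taken from the question; the PID comes from the lsof output):

sudo lsof -nP -iTCP:8000 -sTCP:LISTEN   # list whatever is bound to port 8000
kill <PID>                              # stop the conflicting process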
I am currently running a service with systemctl, and it is running as an HTTP proxy, not normal HTTP. Is this something that Google does? I am using port 8080 and I can't connect to it via HTTP. My daemon (wings.service) is using port 8080 with the service type http-proxy (I am seeing this with the command nmap -sV -sC -p 8080 35.208.25.61 -vvvv -Pn). Instead, I want the daemon to use plain HTTP, so my panel can connect to it.
The panel is part of a piece of software called Pterodactyl, which also includes the daemon. Anyway, I have tried everything I could find, and I think the problem I am describing here is what causes the dysfunction on my panel. I might just have to move to a different service to host my Discord bots.
Let me know if there's anything I can do to fix this.
As far as I can understand, you are unable to access the panel via its web URL.
The Pterodactyl web server can be installed using either the NGINX or the Apache web server, and both listen on port 80 by default according to the Pterodactyl web server installation guide, so you must enable HTTP traffic on port 80 on your Compute Engine VM instance.
The default firewall rules on GCP do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them by following these steps (the equivalent gcloud command is sketched after the list):
1. Go to the VM instances page.
2. Click the name of the desired instance.
3. Click the Edit button at the top of the page.
4. Scroll down to the Firewalls section.
5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
6. Click Save.
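Equivalently from the CLI (a sketch; the rule name and target tag are assumptions):

gcloud compute firewall-rules create default-allow-http \
    --allow tcp:80 \
    --target-tags http-server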
Note: The Pterodactyl panel and daemon installations are not the same for every operating system. If, after checking the VPC firewall rules in the VM settings and the status of the web server on the instance (NGINX or Apache), you still cannot access your panel, please provide a step-by-step list of all the commands you followed to complete the installation, including the OS version you used.
I am trying to get GitLab set up with my current installation of Nginx, but I keep getting an Error 502. I have included my configuration files and am not sure what I am doing wrong. I followed the "Using a non-bundled web-server" steps at https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md
/etc/nginx/conf.d/gitlab-omnibus-nginx.conf
http://pastebin.com/bQ8eCiNh
/etc/gitlab/gitlab.rb
http://pastebin.com/Lw5tjwXy
HTTP 502 means "The server was acting as a gateway or proxy and received an invalid response from the upstream server." So there are two possibilities here:
The first possibility is that your GitLab server is not actually working or is returning an invalid response. After starting the GitLab server, use sudo netstat -plnt to make sure it is running on a port, and note the port. Then connect directly to this port in your browser (or from the CLI on the server if necessary) and confirm that GitLab works fine without a proxy in front of it. If GitLab is listening on a socket rather than a port, there are also tools to test HTTP servers through socket connections that you can use.
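For example (a sketch; the port is whatever netstat reports, and the socket path is the Omnibus default, both assumptions for your setup):

sudo netstat -plnt                      # note the port (or socket) GitLab is bound to
curl -v http://127.0.0.1:8080/          # substitute the port netstat reported
# if GitLab listens on a unix socket instead (curl 7.40+ supports this):
curl -v --unix-socket /var/opt/gitlab/gitlab-workhorse/socket http://localhost/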
The second possibility is that Nginx is not configured correctly to connect to GitLab. In this case, check your Nginx error log to see if there is any more detail beyond the "502" error.
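For instance, tail the log while reproducing the 502 (assuming the default log location); "connect() failed" or "no live upstreams" entries usually name the exact upstream Nginx tried to reach:

sudo tail -f /var/log/nginx/error.log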
I'm trying to set up a private Docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000, and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I change localhost to my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when localhost is being passed from nginx?
Server is on EC2 if it helps.
I'm not sure of the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the dataflows for Docker. The Docker registry is actually split into two parts: the index and the registry. The client contacts the index to handle metadata, and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered-down index server. As a consequence, you might want to figure out which registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set up the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
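For context, here is how those two directives might sit in the full proxy block (a sketch assuming the setup from the question: the registry on localhost:5000 behind nginx on port 80):

server {
    listen 80;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;

        # replace the endpoint the registry advertises (localhost:5000)
        # with the host the client actually connected to
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}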
I think the problem you face is that the docker-registry advertises so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and the Docker client then uses those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with Nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises the endpoints as your remote host, even though it listens on localhost:5000.