I am looking to configure NiFi UI access via HTTP. I've set the values necessary (or so I thought) in nifi.properties.
Properties set:
nifi.web.http.host=192.168.1.99
nifi.web.http.port=8080
I know NiFi doesn't allow both HTTP and HTTPS to be used simultaneously, so I removed the default HTTPS values below and left them unset:
nifi.web.https.host=
nifi.web.https.port=
Once I saved the file, I restarted the service with systemctl restart nifi.service so it would read the new config, then ran netstat -plnt to check whether the port was open, to no avail.
Did you set the HTTP values? If not, you're not providing a port for NiFi to listen on.
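For reference, a minimal HTTP-only combination in nifi.properties looks roughly like this (the host/port values are the ones from your question; the point is that the HTTPS entries must stay blank, since NiFi binds either HTTP or HTTPS, not both):

nifi.web.http.host=192.168.1.99
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=

After restarting, something like sudo ss -plnt | grep 8080 (or the netstat command you already ran) should show a Java process listening on that address if the new properties were picked up.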
I am currently running a service with systemctl, and it is running as an HTTP proxy, not normal HTTP. Is this something that Google does? The daemon is using port 8080 and I can't connect to it via HTTP; nmap reports its service type as http-proxy (nmap -sV -sC -p 8080 35.208.25.61 -vvvv -Pn). Instead, I want the daemon I'm running (wings.service) to speak plain HTTP so it can use that type of connection to connect to my panel.
The panel and the daemon are both part of a piece of software called Pterodactyl. Anyway, I have tried everything I can think of, and I believe this is the problem that is breaking my panel. I might just have to move to a different service to host my Discord bots.
Let me know if there's anything I can do to fix this.
As far as I can understand, you are unable to access the panel via its web URL.
The Pterodactyl panel can be served by either NGINX or Apache, and according to the Pterodactyl installation guide both web servers listen on port 80 by default, so you must allow HTTP traffic on port 80 to your Compute Engine VM instance.
The default firewall rules on GCP do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them by following these steps (a gcloud equivalent is sketched after the list):
1. Go to the VM instances page.
2. Click the name of the desired instance.
3. Click the Edit button at the top of the page.
4. Scroll down to the Firewalls section.
5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
6. Click Save.
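If you prefer the command line, roughly the same effect can be achieved with gcloud (the instance name and zone below are placeholders; default-allow-http and the http-server tag are what the console checkbox normally uses):

# Create (or confirm) a rule allowing HTTP to instances tagged http-server
gcloud compute firewall-rules create default-allow-http \
    --allow tcp:80 --target-tags http-server

# Tag the VM so the rule applies to it
gcloud compute instances add-tags my-instance --tags http-server --zone us-central1-a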
Note: the Pterodactyl panel and daemon installations are not the same for every operating system. If, after checking the VPC firewall rules in the VM settings and the status of the web server on the instance (NGINX or Apache), you still cannot access your panel, please provide a step-by-step list of all commands you followed to complete the installation, including the OS version you used.
I'm working on a small Flask-based service behind Nginx that serves some content over HTTP. It will do so using two-way certificate authentication - which I understand how to do with Nginx - but users must log in and upload their own certificate that will be used for the auth piece.
So the scenario is:
User has a server that generates a cert that is used for client authentication.
They log into the service to upload that cert for their server.
Systems that pull the cert from the user's server can now reach an endpoint on my service that serves the content and authenticates using the cert.
I can't find anything in the Nginx docs that says I can have a single keystore or directory that Nginx looks at to match the cert for an incoming request. I know I can configure this 'per-server' in Nginx.
The idea I currently have is to allow the web app to trigger a script that reads the Nginx conf file, inserts a new server entry and a specified port with the path to the uploaded cert, and then sends the HUP signal to reload Nginx.
I'm wondering if anyone in the community has done something similar to this before with Nginx or if they have a better solution for the odd scenario I'm presenting.
After a lot more research and reading some of the documentation on nginx.com, I found that I was way over-complicating this.
Instead of modifying my configuration in sites-available I should be adding and removing config files from /etc/nginx/conf.d/ and then telling Nginx to reload by calling sudo nginx -s reload.
I'll have the app call a script to run the needed commands and add the script into the sudoers file for the www-data user.
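To sketch what one of those generated files might look like (the file name, port, and paths below are illustrative, not my actual setup), each upload would drop something like this into /etc/nginx/conf.d/ before the reload:

# /etc/nginx/conf.d/client-12345.conf  (hypothetical generated file)
server {
    listen 8443 ssl;                                              # port assigned to this client
    ssl_certificate         /etc/nginx/ssl/server.crt;
    ssl_certificate_key     /etc/nginx/ssl/server.key;
    ssl_client_certificate  /var/uploads/certs/client-12345.pem;  # the uploaded client cert
    ssl_verify_client       on;                                   # enforce two-way TLS

    location / {
        proxy_pass http://127.0.0.1:5000;                         # the Flask app
    }
}

The sudoers entry can then stay narrow, e.g. a single line like www-data ALL=(root) NOPASSWD: /usr/local/bin/refresh-nginx.sh (script name made up here) rather than broader sudo rights.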
My Meteor server needs to run behind an NGINX proxy which receives HTTP requests, adds the Kerberos-authenticated user name to the header and forwards them to another webserver (assumed to be NodeJS) over a Unix domain socket which is accessed through a file secured by Unix permissions.
I would like to use Meteor instead of NodeJS, but the only way I can get Meteor to listen on a Unix domain socket is to hack a file called run-proxy.js deep inside my Meteor installation and modify a call to server.listen(...) to pass it a file name instead of a port number.
This works, but is there a better way to achieve this? Ideally without modifying Meteor's code. I did try meteor --port /home/me/file_name but it complains that there is no port number.
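For reference, the hack itself boils down to one line (surrounding Meteor code omitted; Node's HTTP server accepts a Unix socket path wherever it would accept a port number):

// in run-proxy.js, roughly: server.listen(port, ...) becomes
server.listen('/home/me/file_name');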
Is there any way I can configure nginx other than through the normal nginx.conf file?
Like an XML configuration, or memcache, or some other mechanism?
My objective is to add/remove upstreams in the configuration dynamically. Nginx doesn't seem to have a direct solution for this, so I was planning to play with the configuration file, but I am finding it very difficult and error-prone to modify the file from scripts/programs.
Any suggestions?
No, you can't. The only way to "dynamically" reconfigure nginx is to process the config files in external software and then reload the server. Nor can you "program" the config like in Apache. The nginx config is mostly a static thing, which is part of why its performance is praised.
Source: I needed this too and did some research.
Edit: I have a "supervising" tool installed on my hosts that monitors load, clusters and such. I've ended up implementing the upstream scaling through it. Whenever a new upstream is ready, it notifies my "supervisor" on all web servers. The "supervisors" then query for the "virtual hosts" served on the new upstream and add all of them to their context on the nginx host; then it's just nginx -t && nginx -s reload. This is for nginx passing FastCGI requests to php-fpm.
Edit 2: I have many server blocks for different server_names (sites), each with an upstream associated with it on another host (or hosts). In the server block I have an include /path/to/where/my/upstream/configs/are/us-<unique_site_id>.conf line. The us-<unique_site_id>.conf file is generated when the server block is created and populated with the existing upstream(s) information. When there are changes in the upstream pool or the site configuration, the file is rewritten to reflect them.
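To make that concrete, here is a sketch of the generated pieces (site IDs, hosts and ports invented for illustration; in this sketch the generated upstream file is included at the http level, since nginx only accepts upstream{} blocks there, and the server block just references the upstream by name):

# us-123.conf - rewritten by the supervisor whenever the backend pool changes
upstream us_123 {
    server 10.0.0.21:9000;   # php-fpm backend 1
    server 10.0.0.22:9000;   # php-fpm backend 2
}

# inside the site's server block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass us_123;
}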
I'm trying to set up a private Docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000 and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I replace localhost with my server's IP address in the nginx configuration file I can push all right. Why would my local docker push command complain about localhost when localhost is only referenced from nginx?
Server is on EC2 if it helps.
I'm not sure of the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the dataflows for Docker. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered-down index server. As a consequence, you might want to figure out what registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
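For context, those two lines go in the server/location block that proxies to the registry, roughly like this (the listen port and upstream address come from the question; the rest is a generic proxy block, not necessarily your exact config):

server {
    listen 80;

    location / {
        proxy_pass http://localhost:5000;

        # replace the endpoint the registry advertises with the host the client actually used
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}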
I think the problem you face is that the docker-registry is advertising so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and that the Docker client will then use those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with Nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises endpoints as your remote host, even if it listens on localhost:5000.
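If no such option exists, a common workaround is to handle it on the Nginx side instead and forward the original Host header, so the registry builds its advertised endpoints from the public name rather than localhost (sketch only):

location / {
    proxy_pass http://localhost:5000;
    proxy_set_header Host $http_host;   # registry now sees the public host, not localhost
}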