I'm unsure of the best way to set ports in PM2. I don't see this documented anywhere. I'm using a front-facing nginx server which proxies to specific ports on the backend that represent the node servers. How do I best set this type of configuration up?
One method is to set env in process.json; your app then reads the value via process.env.PORT:
{
  "name": "MyApp",
  "script": "./MyApp/app.js",
  "instances": 1,
  "exec_mode": "cluster",
  "env": { "PORT": 3030 }
}
I am using Ansible to create a server in the Hetzner Cloud; the playbook reads:
- name: create the server at Hetzner
  hetzner.hcloud.hcloud_server:
    name: "{{ server_hostname }}"
    enable_ipv4: false
    enable_ipv6: false
    server_type: cx11
    location: "{{ server_location }}"
    image: ubuntu-22.04
    ssh_keys:
      - "mykey"
    state: present
    api_token: "{{ hetzner_secret }}"
    private_networks: ipfire
  register: server
My aim is to integrate the new server into the private network named 'ipfire' that I previously created. The server should not be accessible from the internet, so I have disabled IPv4 and IPv6. Instead, I'd like to reach the server by connecting to the private network 'ipfire' via OpenVPN and then connecting via SSH from there.
Unfortunately, I get an error message as follows:
PLAY [Order servers] ********************************************************************************************************
TASK [hetznerserver : create the server at Hetzner] *************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (hetzner.hcloud.hcloud_server) module: private_networks. Supported parameters include: rebuild_protection, api_token, location, enable_ipv6, upgrade_disk, ipv4, endpoint, ipv6, firewalls, server_type, state, force, labels, ssh_keys, delete_protection, image, id, name, enable_ipv4, placement_group, force_upgrade, user_data, datacenter, rescue_mode, allow_deprecated_image, volumes, backups."}
PLAY RECAP ******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The private_networks parameter of the module does not seem to work like this?
Error messages like Unsupported parameters for (<moduleName>) module: <givenParameter>. Supported parameters include: <supportedParametersList> usually indicate a syntax or usage error with the module in question.
Therefore one may need to look up the respective documentation, in this case hcloud_server module – Create and manage cloud servers on the Hetzner Cloud.
If the documentation shows that the parameters in question are available, it indicates
either a version mismatch of the module used, meaning the installed version is too old and an update is necessary,
or a bug within the module code, in which case further debugging and investigation within the module code is necessary.
Code and Documentation Links
ansible-collections / hetzner.hcloud
After further investigation it might turn out that the parameter in question was introduced only recently, for example
Github hetzner.hcloud Issue #150 "Unable to create cloud server without public ipv4 and ipv6"
Github hetzner.hcloud Pull #160 "Add possibility to specify private network when creating or updating servers"
which indicates for your example case that you'll need to update the Ansible collection in question, since the private_networks parameter was not available in the version you are using but was only introduced as of v1.9.0.
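To verify and fix that, you can check the installed collection version and upgrade it; a sketch, assuming a reasonably recent ansible-core (the --upgrade option requires 2.11+):

# show which version of the collection is currently installed
ansible-galaxy collection list hetzner.hcloud

# install the latest release (>= 1.9.0 is required for private_networks)
ansible-galaxy collection install hetzner.hcloud --upgrade

Note also that, if I read the module docs correctly, private_networks expects a list of network names or IDs, so the task should pass the network as

private_networks:
  - ipfire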
I have set up a small test cloud on three pieces of hardware. It works fine in EDGE mode, but when I try to configure it for VPCMIDO, new instances begin to launch but then time out and move to a terminated state. I can also see the instances' initial volume and config data appear in the NC and CC data directories. Below are my system layout and network.json.
HOST 1 : CLC/UFS/WALRUS/MIDO CLUSTER/MIDO GATEWAY/MIDOLMAN AGENT:
em1 (All Services including Mido Cluster): 10.0.0.21
em3 (Target VPCMIDO Adapter): 10.0.0.22
HOST 2 : CC/SC
em1 : 10.0.0.23
HOST 3 : NC/MIDOLMAN AGENT
em1 : 10.0.0.24
{
  "Mido": {
    "Gateways": [
      {
        "Ip": "10.0.0.22",
        "ExternalDevice": "em3",
        "ExternalCidr": "192.168.0.0/16",
        "ExternalIp": "192.168.0.2",
        "ExternalRouterIp": "192.168.0.1"
      }
    ]
  },
  "Mode": "VPCMIDO",
  "PublicIps": [
    "10.0.100.1-10.0.100.254"
  ]
}
I may be misunderstanding the intent of reserving an interface just for the mido gateway. All of my eucalyptus/zookeeper/cassandra/midonet configs use the 10.0.0.21 interface and seem to communicate fine. The midonet tunnel zone reports my CLC host and NC host successfully in the tunnel zone. The only part of my config that references the interface I intend to use for the midonet gateway is the network.json. No errors were returned at any time during my config so I think I may be missing something conceptual.
You may need to start eucanetd as described here:
https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/starting_euca_clc.html
The eucanetd component in VPCMIDO mode runs on the cloud controller and is responsible for controlling MidoNet.
When eucanetd is not running, instances will fail to start because the required network resources will not be created.
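On the CLC that would be roughly the following, assuming a systemd-based install (Eucalyptus 4.4 targets RHEL/CentOS 7; adjust to your init system):

# on the cloud controller host
systemctl enable eucanetd
systemctl start eucanetd
systemctl status eucanetd    # verify the service is actually running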
I configured a bridge on the NC and instances were able to launch, and I no longer got an error in my nc.log. The docs and the eucalyptus.conf file comments tell me I shouldn't need to do this in VPCMIDO networking mode: https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configuring_bridge.html
Despite all that, adding the bridge fixed the issue.
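For reference, a bridge on a CentOS 7 NC is typically defined with ifcfg files along these lines; the device names and address below are assumptions matching the layout above:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.0.0.24
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 -- enslave the NIC to the bridge
DEVICE=em1
TYPE=Ethernet
BRIDGE=br0
BOOTPROTO=none
ONBOOT=yes

A network service restart is then needed for the bridge to come up.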
In my router settings, I have set up the following port forwards:
port 5000 - my Synology disk station entry
port 1337 - my router panel
And I have pointed some subdomains at my external IP using DDNS:
wifi.example.com
disk.example.com
www.example.com
What I want to do is:
wifi.example.com redirects to port 1337 and opens the router panel
disk.example.com redirects to port 5000 and opens the Synology panel
www.example.com goes to the Synology Web Station server
I have tried forwarding port 80 to my disk station in the router settings, intending to do the redirection on the disk station via nginx or the reverse proxy provided by Synology. However, I cannot find nginx.conf on the Synology, and my attempt to set up the reverse proxy failed.
Can anyone give me any clues? I'd appreciate any help.
I think it must be possible...
Unfortunately it is not possible to specify a port in a DNS entry,
but I think the setup you wish for is possible with nginx.
The nginx configuration on the Synology is somewhat different, though.
When you point wifi.example.com at your home IP address,
you have to configure your router to port-forward port 80 to port 80 on your Synology.
On the Synology you need to configure Web Station so that nginx is really the backend; in my case it was first set to Apache.
In Web Station you then have to create a virtual host entry.
After that you have to open an SSH session to your Synology
and do a: cd /var/packages/WebStation/etc
The nginx configuration is in the VirtualHost.json file.
In this case you should see something like below:
"eeb4adef-1fc5-4fd5-bacc-1fbd1e747d1c" : {
"backend" : 0,
"fqdn" : "wifi.example.com",
"https" : {
"compatibility" : 1,
"compression" : false,
"hsts" : false,
"http2" : false,
"redirect" : false
},
"index" : [ "index.html", "index.htm", "index.cgi", "index.php", "index.php5" ],
"php" : 4,
"port" : {
"http" : [ 80 ],
"https" : [ 443 ]
},
"root" : "/volume1/web/wiki"
},
"version" : 2
}
Now you can remove the "root" line and replace it with:
"return" : "301 http://someurl:1337$request_uri"
After this modification you will have to restart the Web Station service. When you then look at the virtual host in Web Station via DSM, it will be shown as "Abnormal", but the redirect will (in my case at least) be working.
I hope this input is useful for your question.
You can also try my solution:
redirect-http-to-https
Set up a Docker container for the redirect.
Then you can forward port 80 (with the subdomain) to the Docker container's port and forward port 443 to the application's port.
This does not change any default settings, which avoids potential problems.
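A minimal sketch of such a redirect container, using the stock nginx image; the config file name, host port, and target URL are assumptions:

# redirect.conf -- nginx does nothing but issue the redirect
server {
    listen 80;
    server_name disk.example.com;
    return 301 https://disk.example.com:5001$request_uri;
}

# run it, mapping an arbitrary free host port to the container's port 80
docker run -d --name redirect \
  -v "$PWD/redirect.conf:/etc/nginx/conf.d/default.conf:ro" \
  -p 8080:80 nginx:alpine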
Cross-post from ServerFault
Since I have seen similar issues posted here, I will cross-post my question from ServerFault (https://serverfault.com/questions/855120/multi-container-docker-on-aws-nginx-use-host-machine-etc-hosts-resolver). I hope this is permitted.
I have a multi-container docker environment on Amazon Elastic Beanstalk with the following Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "memoryReservation": 256,
      "image": "my/nginx/repo/image",
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "api"
      ],
      "essential": true
    },
    {
      "name": "api",
      "memoryReservation": 256,
      "image": "my-api/repo",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 80
        }
      ]
    }
  ]
}
Ultimately I want the node app served by nginx to resolve requests to named addresses from linked containers, so in my web image (node app) I'd like to make a request to http://api/some/resource and let nginx resolve that to the api container.
Now, since Docker adds a host entry for the api container due to the specified link, I want the nginx server to resolve addresses from the host's /etc/hosts file. However, as I found out, nginx uses its own resolver. After researching the issue a bit, I found that in non-Elastic-Beanstalk multi-container setups with user-defined networks, the resolver would be provided by Docker on 127.0.0.11; but since it is currently not possible to define user-defined networks in Dockerrun.aws.json, I keep looking for a different solution. The links can be resolved inside the container (pinging api does work), but nginx does its own thing there.
I have read about dnsmasq as well; however, I wanted to get this running without installing that package. Do I even have a choice here?
This is a service discovery topic, and I don't think this approach is headed in the right direction.
It is true that links can be resolved to the app's IP; however, this depends on the start order: the app first and then nginx. If the app is recreated, or scaled after nginx has started, /etc/hosts inside the nginx container won't be updated. It is possible to recreate the nginx container to refresh /etc/hosts, but then all connections to nginx will be dropped.
Since you are using Amazon Elastic Beanstalk, I think a better solution is to use Consul (service discovery) + Registrator (service registration) + nginx (with SRV support: NGINX Plus or a third-party module).
Personally, I have done a similar thing without Elastic Beanstalk, using Docker Swarm (service discovery + service registration) + nginx (with a modified HAproxy-SRV).
I hope this answer can help your decision.
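For completeness: where Docker's embedded DNS is available (user-defined networks rather than classic links, so not in the Elastic Beanstalk setup described above), the usual way to make nginx re-resolve a service name at request time is a resolver directive plus a variable in proxy_pass; a sketch using the service name from the question:

# Docker's embedded DNS; re-resolve instead of caching forever
resolver 127.0.0.11 valid=10s;

server {
    listen 80;

    location / {
        # using a variable forces nginx to resolve "api" at request time
        set $api_upstream http://api;
        proxy_pass $api_upstream$request_uri;
    }
}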
I'm setting up multiple virtual servers using Docker and managing the routing with an nginx reverse proxy (https://hub.docker.com/r/jwilder/nginx-proxy/).
I already have a couple of other containers (for MySQL and WordPress).
I would like to use mupx to deploy because of its ease of use (https://github.com/arunoda/meteor-up/tree/mupx#), though it requires you to provide a port and defaults to 80 (which conflicts with nginx).
Here are the relevant elements from the mup.json:
"servers": [
{
"host": "111.111.111.111",
"username": "root",
"pem": "path/to/key",
"env":{
"VIRTUAL_HOST":"subdomain.domain.com"
}
}
],
...
"env": {
"ROOT_URL": "http://subdomain.domain.com"
}
Anyone have any experience with this?
I think you can change the port of the app to any free port to avoid the conflict:
"env": {
  "PORT": 8080
}