I'm kind of new to setting up a production machine, and I don't understand why I'm not seeing the default nginx index page on my EC2 instance. nginx is installed and running on the server, but when I try to access it, the page keeps loading and shows nothing but a blank page. I'm trying to access it through the public IP (35.160.22.104) and through the public DNS (ec2-35-160-22-104.us-west-2.compute.amazonaws.com). Both behave the same. What am I doing wrong?
UPDATE:
I realized that when restarting the nginx service, it didn't show the "ok" message. So I took a look at error.log:
[ 2016-12-12 17:16:11.2439 709/7f3eebc93780 age/Cor/CoreMain.cpp:967 ]: Passenger core shutdown finished
2016/12/12 17:16:12 [info] 782#782: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:71
[ 2016-12-12 17:16:12.2742 791/7fb0c37a0780 age/Wat/WatchdogMain.cpp:1291 ]: Starting Passenger watchdog...
[ 2016-12-12 17:16:12.2820 794/7fe4d238b780 age/Cor/CoreMain.cpp:982 ]: Starting Passenger core...
[ 2016-12-12 17:16:12.2820 794/7fe4d238b780 age/Cor/CoreMain.cpp:235 ]: Passenger core running in multi-application mode.
[ 2016-12-12 17:16:12.2832 794/7fe4d238b780 age/Cor/CoreMain.cpp:732 ]: Passenger core online, PID 794
[ 2016-12-12 17:16:12.2911 799/7f06bb50a780 age/Ust/UstRouterMain.cpp:529 ]: Starting Passenger UstRouter...
[ 2016-12-12 17:16:12.2916 799/7f06bb50a780 age/Ust/UstRouterMain.cpp:342 ]: Passenger UstRouter online, PID 799
Anyway, it doesn't look like an error, just the usual log output.
UPDATE 2:
Nginx is running:
root 810 1 0 17:16 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 815 810 0 17:16 ? 00:00:00 nginx: worker process
ubuntu 853 32300 0 17:44 pts/0 00:00:00 grep --color=auto nginx
And when I run curl localhost, it returns the HTML as expected!
UPDATE3:
When I run systemctl status nginx, I get the following error:
Dec 12 17:54:48 ip-172-31-40-156 systemd[1]: nginx.service: Failed to read PID from file /run/nginx.pid: Invalid argument
I'm trying to figure out what it means.
UPDATE4:
Ran the command nmap 35.160.22.104 -Pn PORT STATE SERVICE 22/tcp and got the output:
Starting Nmap 7.01 ( https://nmap.org ) at 2016-12-12 18:05 UTC
Failed to resolve "PORT".
Failed to resolve "STATE".
Failed to resolve "SERVICE".
Unable to split netmask from target expression: "22/tcp"
Nmap scan report for ec2-35-160-22-104.us-west-2.compute.amazonaws.com (35.160.22.104)
Host is up (0.0015s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
UPDATE5:
Output for netstat -tuanp | grep 80:
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
Your EC2 instance has a security group associated with it.
Go to the AWS console: EC2 -> Instances -> click on your instance -> in the 'Description' tab at the bottom -> Security groups. Click on the group name and you will be redirected to EC2 -> Network & Security. Click 'Edit inbound rules' and add a rule:
Type: HTTP
Save. And that should be fine!
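If you prefer doing it from the CLI, the same rule can be added with something like this (the security group ID below is a placeholder; use the one attached to your instance):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0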
I am trying to deploy an application with k3s Kubernetes. Currently I have two master nodes behind a load balancer, and I have some issues connecting worker nodes to them. All nodes and the load balancer run in separate VMs.
The load balancer is an nginx server with the following configuration.
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {}

stream {
    upstream k3s_servers {
        server {master_node1_ip}:6443;
        server {master_node2_ip}:6443;
    }

    server {
        listen 6443;
        proxy_pass k3s_servers;
    }
}
The master nodes connect through the load balancer, and it seemingly works as expected:
ubuntu@ip-172-31-20-78:/$ sudo k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-33-183 Ready control-plane,master 81m v1.20.2+k3s1
ip-172-31-20-78 Ready control-plane,master 81m v1.20.2+k3s1
However, the worker nodes yield an error about the SSL certificate:
sudo systemctl status k3s-agent
● k3s-agent.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-01-24 15:54:10 UTC; 19min ago
Docs: https://k3s.io
Process: 3065 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 3066 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 3067 (k3s-agent)
Tasks: 6
Memory: 167.3M
CGroup: /system.slice/k3s-agent.service
└─3067 /usr/local/bin/k3s agent
Jan 24 16:12:23 ip-172-31-27-179 k3s[3311]: time="2021-01-24T16:34:02.483557102Z" level=info msg="Running load balancer 127.0.0.1:39357 -> [104.248.34.
Jan 24 16:12:23 ip-172-31-27-179 k3s[3067]: time="2021-01-24T16:12:23.313819380Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:339
level=error msg="failed to get CA certs: Get "https://127.0.0.1:39357/cacerts": EOF"
If I try to change K3S_URL in /etc/systemd/system/k3s-agent.service.env to use http, I get an error saying that only https is accepted.
Using the IP address instead of the hostname in k3s-agent.service.env works for me. Not really a solution so much as a workaround.
/etc/systemd/system/k3s-agent.service.env
K3S_TOKEN='<token>'
K3S_URL='192.168.xxx.xxx:6443'
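After editing k3s-agent.service.env, the agent has to be restarted to pick up the change. A quick sketch of how I did that (plain systemd commands, nothing k3s-specific):
sudo systemctl daemon-reload          # not strictly needed for env file changes, but harmless
sudo systemctl restart k3s-agent
sudo journalctl -u k3s-agent -f       # watch whether the "failed to get CA certs" error is gone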
My current CentOS 7 server is already running Apache web server 2.4.x on the default ports 80 and 443. Puppet Enterprise 2019.x, which uses nginx (pe-nginx, to be exact), is configured by default to use the exact same ports.
What needs to be changed to make pe-nginx web server use ports 8090 and 444 instead of the default 80 and 443?
According to https://puppet.com/docs/pe/2019.0/config_console.html, I should disable the HTTPS redirect. Here are the instructions I tried:
The pe-nginx webserver listens on port 80 by default. If you need to run your own service on port 80, you can disable the HTTPS redirect.
Edit your Hiera.yaml file to disable HTTP redirect.
puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect: false
This is the modified file: /etc/puppetlabs/code/environments/production/hiera.yaml
---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://puppet.com/docs/puppet/latest/environments_about.html for further details on environments.
  # datadir: data
  # data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "nodes/%{::trusted.certname}.yaml"
  - name: "Other YAML hierarchy levels"
    paths:
      - "common.yaml"
puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect: false
I am new to YAML, but I can see that this is probably not right; I tried it anyway.
The documentation does not say what to do after changing the file to apply the change, so this is what I tried:
puppet infrastructure configure --recover
Notice: Unable to recover PE configuration: The Lookup Configuration at '/etc/puppetlabs/code/environments/production/hiera.yaml' has wrong type, unrecognized key 'puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect'
2019-05-07T15:41:29.722+00:00 - [Notice]: Compiled catalog for tadm10-adm.test.hfgs.net in environment enterprise in 2.08 seconds
2019-05-07T15:41:42.489+00:00 - [Notice]: Applied catalog in 12.05 seconds
netstat -tulpn | grep -v tcp6|grep ":443\|:80\|:8090\|:444"
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 32272/nginx: master
While I could never figure out how to accomplish this using Puppet Labs' suggestion of modifying the hiera.yaml file, I have figured out how to do it using the Web Console.
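As an aside, my untested guess about the Hiera error above: in Hiera 5, hiera.yaml only defines the lookup hierarchy, and key/value data belongs in the data files it points to, so the setting would probably go into something like this (path assumed from the default datadir) rather than into hiera.yaml itself:
# /etc/puppetlabs/code/environments/production/data/common.yaml
---
puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect: false
I have not verified that route, though; what follows is the Web Console approach that did work for me.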
The modifications remove all conflicts with the existing Apache httpd that uses ports 80 and 443.
The PE Web Console will now need to be accessed via port 444.
This is the fix:
From the Web Console
Select Configure
Select Classification
Select the + icon labeled "PE Infrastructure" production to display Classes
Select PE Console production link
Select Configuration tab
Under "Classes" section - Add new class
Select "puppet_enterprise::profile::console::proxy::http_redirect" from the list
Select Add class button
Select commit 1 change
New class now shows on the page.
Select parameter name: enable_http_redirect from the list
Set value to false
Add parameter
Select commit 1 change
Select parameter name: ssl_listen_port from the list
Set value to 444
Add parameter
Select commit 1 change
When running puppet agent -t, I now get the error shown below:
Duplicate declaration: Class[Puppet_enterprise::Profile::Console::Proxy::Http_redirect] is already declared;
cannot redeclare (file: /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/console/proxy.pp,
line: 211)
Remove the duplicate declaration from proxy.pp
Edit: /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/console/proxy.pp
#class { 'puppet_enterprise::profile::console::proxy::http_redirect' :
# ssl_listen_port => Integer($ssl_listen_port),
#}
Re-run puppet agent -t
puppet agent -t
Console Port (Port 443 change)
From the Web Console
Configure
Classification
Select PE Infrastructure production
Configuration tab
Class: puppet_enterprise::profile::console
Add Parameter
Parameter Name: console_port
Value: 444
Run puppet agent -t and check ports
# puppet agent -t
# netstat -tulpn | grep -v tcp6|grep ":443\|:80\|:8090\|:444"
tcp 0 0 0.0.0.0:444 0.0.0.0:* LISTEN 11182/nginx: master
Start httpd
# systemctl start httpd
# netstat -tulpn | grep -v tcp6|grep ":443\|:80\|:8090\|:444"
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 13353/httpd
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 13353/httpd
tcp 0 0 0.0.0.0:444 0.0.0.0:* LISTEN 11182/nginx: master
The PE Web Console is now accessed via port 444:
https://hostname:444/#/inspect/overview
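To sanity-check from the shell (hostname here is whatever your console host is), something like this should get an HTTP response from pe-nginx on the new port:
curl -k -I https://hostname:444/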
I am new to Datadog and nginx. I noticed when I was creating a monitor for some integrations that several of the integrations were labeled as misconfigured. My guess is that someone clicked the install button but did not finish the remaining integration steps. I started to work with nginx and quickly hit a roadblock.
I verified that nginx was built with the http status module:
$ nginx -V 2>&1| grep -o http_stub_status_module
http_stub_status_module
The nginx install is under a different directory than usual,
and the configuration file is under
/<dir>/parts/nginx/conf
I created the status.conf file there.
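For context, a status.conf for the Datadog check is basically just a stub_status server block along these lines (the listen port and allow/deny rules here are illustrative, not necessarily what I used verbatim):
server {
    listen 81;
    server_name localhost;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}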
When I reload nginx I get a failure. I don't understand what it means or how to proceed from here.
nginx: [error] open() "/<dir>/parts/nginx/logs/nginx.pid" failed (2: No such file or directory)
There is a logs directory with nothing in it.
ps -ef|grep nginx
user 35958 88952 0 May24 ? 00:00:43 nginx: worker process
user 35959 88952 0 May24 ? 00:00:48 nginx: worker process
root 88952 1 0 Feb21 ? 00:00:00 nginx: master process <dir>/parts/nginx/sbin/nginx -c <dir>/etc/nginx/balancer.conf -g pid <dir>/var/nginx-balancer.pid; lock_file /<dir>/var/nginx-balancer.lock; error_log <dir>/var/logs/nginx-balancer-error.log;
user 109169 63043 0 13:13 pts/0 00:00:00 grep --color=auto nginx
I think the issue is that our install doesn't follow the same defaults as the instructions, and I'm pretty sure I'm not doing this correctly.
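My guess (untested) is that the reload has to go through the same binary, config, and globals the master process was started with, rather than a bare nginx -s reload, which falls back to the compiled-in defaults and hence looks for /<dir>/parts/nginx/logs/nginx.pid. Something like:
<dir>/parts/nginx/sbin/nginx -c <dir>/etc/nginx/balancer.conf \
    -g 'pid <dir>/var/nginx-balancer.pid; lock_file /<dir>/var/nginx-balancer.lock; error_log <dir>/var/logs/nginx-balancer-error.log;' \
    -s reload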
If anyone has any insights that would be great!
Chris
I'm going crazy with rsync, which gives me a "connection refused" error. Here's my problem:
I have two servers used to store data, with rsync installed on both because I need the servers to stay synchronized: changes on one server should be replicated on the other and vice versa. The first node (sn1) works, while the second (sn2) does not. In detail:
- sn1 has 192.168.13.131 as ip address
- sn2 has 192.168.13.132 as ip address
If I run rsync rsync://192.168.13.131 from sn1 or sn2, it works fine; but if I run rsync rsync://192.168.13.132 from sn1 or sn2, I get this error:
rsync: failed to connect to 192.168.13.132 (192.168.13.132): Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]
Here's some information about sn2.
/etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.130.132
[account]
max connections = 20
path = /srv/node/
read only = False
lock file = /var/lock/account.lock
[container]
max connections = 20
path = /srv/node/
read only = False
lock file = /var/lock/container.lock
[object]
max connections = 20
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
and /etc/default/rsync
# defaults file for rsync daemon mode
#
# This file is only used for init.d based systems!
# If this system uses systemd, you can specify options etc. for rsync
# in daemon mode by copying /lib/systemd/system/rsync.service to
# /etc/systemd/system/rsync.service and modifying the copy; add required
# options to the ExecStart line.
# start rsync in daemon mode from init.d script?
# only allowed values are "true", "false", and "inetd"
# Use "inetd" if you want to start the rsyncd from inetd,
# all this does is prevent the init.d script from printing a message
# about not starting rsyncd (you still need to modify inetd's config yourself).
RSYNC_ENABLE=true
# which file should be used as the configuration file for rsync.
# This file is used instead of the default /etc/rsyncd.conf
# Warning: This option has no effect if the daemon is accessed
# using a remote shell. When using a different file for
# rsync you might want to symlink /etc/rsyncd.conf to
# that file.
# RSYNC_CONFIG_FILE=
# what extra options to give rsync --daemon?
# that excludes the --daemon; that's always done in the init.d script
# Possibilities are:
# --address=123.45.67.89 (bind to a specific IP address)
# --port=8730 (bind to specified port; default 873)
RSYNC_OPTS=''
# run rsyncd at a nice level?
# the rsync daemon can impact performance due to much I/O and CPU usage,
# so you may want to run it at a nicer priority than the default priority.
# Allowed values are 0 - 19 inclusive; 10 is a reasonable value.
RSYNC_NICE=''
# run rsyncd with ionice?
# "ionice" does for IO load what "nice" does for CPU load.
# As rsync is often used for backups which aren't all that time-critical,
# reducing the rsync IO priority will benefit the rest of the system.
# See the manpage for ionice for allowed options.
# -c3 is recommended, this will run rsync IO at "idle" priority. Uncomment
# the next line to activate this.
# RSYNC_IONICE='-c3'
# Don't forget to create an appropriate config file,
# else the daemon will not start.
Now some logs, from /var/log/rsyncd.log:
2018/05/04 15:10:16 [889] rsyncd version 3.1.2 starting, listening on port 873
2018/05/04 15:10:16 [889] bind() failed: Cannot assign requested address (address-family 2)
2018/05/04 15:10:16 [889] unable to bind any inbound sockets on port 873
2018/05/04 15:10:16 [889] rsync error: error in socket IO (code 10) at socket.c(555) [Receiver=3.1.2]
Output of ps aux | grep rsync command on sn2:
sn2 1555 0.0 0.1 13136 1060 pts/0 S+ 15:46 0:00 grep --color=auto rsync
Output of ps aux | grep rsync command on sn1:
sn1 12875 0.0 0.1 13136 1012 pts/0 S+ 15:48 0:00 grep --color=auto rsync
root 21281 0.0 0.2 12960 2800 ? Ss 13:31 0:00 /usr/bin/rsync --daemon --no-detach
This is the main difference I see between the two nodes.
Output of the command sudo systemctl status rsync on sn1:
rsync.service - fast remote file copy program daemon
Loaded: loaded (/lib/systemd/system/rsync.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-05-04 13:31:10 UTC; 2h 19min ago
Main PID: 21281 (rsync)
Tasks: 1 (limit: 1113)
CGroup: /system.slice/rsync.service
└─21281 /usr/bin/rsync --daemon --no-detach
May 04 13:31:10 sn1 systemd[1]: Started fast remote file copy program daemon.
Output of the same command in sn2:
rsync.service - fast remote file copy program daemon
Loaded: loaded (/lib/systemd/system/rsync.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2018-05-04 15:10:16 UTC; 41min ago
Process: 889 ExecStart=/usr/bin/rsync --daemon --no-detach (code=exited, status=10)
Main PID: 889 (code=exited, status=10)
May 04 15:10:15 sn2 systemd[1]: Started fast remote file copy program daemon.
May 04 15:10:16 sn2 systemd[1]: rsync.service: Main process exited, code=exited, status=10/n/a
May 04 15:10:16 sn2 systemd[1]: rsync.service: Failed with result 'exit-code'.
Output of the command sudo netstat -lptu | grep rsync on sn1:
tcp 0 0 sn1:rsync 0.0.0.0:* LISTEN 21281/rsync
while in sn2 it returns nothing...
Finally, the sn2 /etc/hosts contains:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#ADDED BY ME
#10.0.2.15 sn2
192.168.13.130 proxy-server
192.168.13.131 sn1
192.168.13.132 sn2
#192.168.13.133 sn3
#192.168.13.134 sn4
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
And sn1's /etc/hosts as well:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#ADDED BY ME
#10.0.2.15 sn1
192.168.13.130 proxy-server
192.168.13.131 sn1
192.168.13.132 sn2
#192.168.13.133 sn3
#192.168.13.134 sn4
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
I'm running Ubuntu 18.04 on each server node, in a virtual machine.
Can you help me figure out what is happening? Unfortunately I must use rsync because I'm working on OpenStack Swift, so no changes are allowed :)
OK, I've solved it. It was simple: just gain superuser privileges and then enable and start the rsync service:
sudo su
Enter your password, then type:
systemctl enable rsync
systemctl start rsync
If you don't have a systemd-based system, just use service instead:
service rsync restart
You can check that rsync is now working by looking at /var/log/rsyncd.log. The bind() error is now gone.
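To double-check, the daemon should now show up as active and listening on port 873:
sudo systemctl status rsync
sudo netstat -lptu | grep rsync    # should now show rsync listening, as it already does on sn1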
With the default config, I always do rsync -rtP /home/me/source/ x.x.x.x:/home/someoneelse/source (where x.x.x.x is an actual IP address). I am not aware of when, or if, rsync:// would ever need to be specified as the protocol. In my case, I had your error until I enabled sshd. I also installed rsync-daemon, but I do not actually know whether that is needed. Here is the entire solution (I only did this on the remote computer; I believe the local computer, on which I could then run the command above successfully, only had the openssh and rsync packages):
sudo dnf -y install rsync-daemon openssh
sudo systemctl enable rsyncd
sudo systemctl start rsyncd
sudo systemctl enable sshd
sudo systemctl start sshd
To be clear, I have Fedora 27, sudo systemctl status firewalld says the firewall is running, and as far as I'm aware I did not have to manually create any firewall rules (there are no instances of firewall or iptables in my bash history). In the command at the top of this answer, I used some options, but they are not required: -r: recursive, -t: copy timestamps to destination files, -P: show progress. A forward slash (/) is at the end of only the source path, so that rsync doesn't create a directory called /home/someoneelse/source/source in the destination.
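If you do want to inspect the firewall anyway (I didn't have to touch mine), the active zone's rules can be listed with:
sudo firewall-cmd --list-all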
I ssh to the dev box where I am supposed to set up Redmine. Or rather, downgrade Redmine. In January I was asked to upgrade Redmine from 1.2 to 2.2, but the plugins we wanted did not work with 2.2. So now I'm being asked to set up Redmine 1.3.3. We figure we can upgrade from 1.2 to 1.3.3.
In January I had trouble getting Passenger to work with Nginx. This was on a CentOS box. I tried several installs of Nginx. I'm left with different error logs:
This:
whereis nginx.conf
gives me:
nginx: /etc/nginx
but I don't think that is in use.
This:
find / -name error.log
gives me:
/opt/nginx/logs/error.log
/var/log/nginx/error.log
When I tried to start Passenger again I was told something was already running on port 80. But if I did "passenger stop" I was told that passenger was not running.
So I did:
passenger start -p 81
If I run netstat I see something is listening on port 81:
netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:81 localhost:42967 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:51874 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62993 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62905 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:50886 ESTABLISHED
tcp 0 0 localhost:81 localhost:42966 TIME_WAIT
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62992 ESTABLISHED
tcp 0 0 localhost:42967 localhost:81 ESTABLISHED
but if I point my browser here:
http: // 10.0.1.253:81 /
(StackOverFlow does not want me to publish the IP address, so I have to malform it. There is no harm here as it is an internal IP that no one outside my company could reach.)
In Google Chrome, all I get is "Oops! Google Chrome could not connect to 10.0.1.253:81".
I started Phusion Passenger at the command line, and the output is verbose, so I expect to see any error messages in the terminal. But I'm not seeing anything. It's as if my browser request is not being heard, even though netstat seems to indicate the app is listening on port 81.
A lot of other things could be wrong with this app (I still need to reverse migrate the database schema) but I'm not seeing any of the error messages that I expect to see. Actually, I'm not seeing any error messages, which is very odd.
UPDATE:
If I do this:
ps aux | grep nginx
I get:
root 20643 0.0 0.0 103244 832 pts/8 S+ 17:17 0:00 grep nginx
root 23968 0.0 0.0 29920 740 ? Ss Feb13 0:00 nginx: master process /var/lib/passenger-standalone/3.0.19-x86_64-ruby1.9.3-linux-gcc4.4.6-1002/nginx-1.2.6/sbin/nginx -c /tmp/passenger-standalone.23917/config -p /tmp/passenger-standalone.23917/
nobody 23969 0.0 0.0 30588 2276 ? S Feb13 0:34 nginx: worker process
I tried to cat the file /tmp/passenger-standalone.23917/config but it does not seem to exist.
I also killed every session of "screen" and every terminal window where Phusion Passenger might be running, but clearly, looking at ps aux, something is still running.
Could nginx still be running even though Passenger was killed?
This:
ps aux | grep phusion
brings back nothing
and this:
ps aux | grep passenger
Only brings back the line with nginx.
If I do this:
service nginx stop
I get:
nginx: unrecognized service
and:
service nginx start
gives me:
nginx: unrecognized service
This is a CentOS machine, so if I had Nginx installed normally, this would work.
The answer is here: Issue Uploading Files from Rails app hosted on Elastic Beanstalk
You probably have /etc/cron.daily/tmpwatch removing the /tmp/passenger-standalone* files every day, which is causing you all this grief.
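If that turns out to be the case, one possible workaround (an assumption on my part; check the options your tmpwatch version supports) is to exclude the Passenger directories from the daily cleanup, keeping whatever flags the script already uses and just adding an exclude pattern:
# hypothetical line inside /etc/cron.daily/tmpwatch
/usr/sbin/tmpwatch -umc -X '/tmp/passenger*' 10d /tmp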