I have installed nginx via PuPHPet and I am using Laravel 4.1 on CentOS 6.4. Laravel needs the PHP APC module, which I have included in PuPHPet's config.yaml file. After I do a vagrant up and go to my site I get: connect() to unix:/var/run/php5-fpm.sock failed (111: Connection refused) while connecting to upstream. I changed my nginx fastcgi_pass to "/var/run/php5-fpm.sock", which didn't work. Then I did vagrant ssh and ran service php-fpm restart, and after that it works. But I don't want to configure anything manually after I run vagrant up; that's the purpose of Puppet. My question: is there any way I can restart php-fpm when I do vagrant up, or any other way to solve the PHP APC problem? Thanks in advance.
Solution: after hours of researching I was able to solve the problem. I added this code:
exec { "restart php-fpm":
command => "service php-fpm restart"
}
in manifest.pp, at the end of the php-fpm class. For me that was around line 485, just after the service declaration.
I'd much rather you submit an issue via GitHub at https://github.com/puphpet/puphpet/issues
That said, you can run any arbitrary code on $ vagrant up and $ vagrant provision via the exec-once and exec-always features mentioned on the frontpage.
That also said, this is a bug I'd love to fix, please submit a ticket!
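For example, the restart from the solution above could be wired up through exec-always instead of editing the manifest by hand. The exact nesting of the key inside config.yaml varies between PuPHPet versions, so treat this as a sketch and check the frontpage docs for where it belongs:
# puphpet config.yaml -- placement of exec-always is an assumption
exec-always:
    - 'service php-fpm restart'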
An error occurred.
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
Faithfully yours, nginx.
I rebooted the server just before this happened, using sudo reboot now, since it is an Ubuntu server.
I checked this, but I did not understand any of it, unfortunately. Also, I can't access any of the pages on the specific site of mine that is showing this error.
The error.log in /var/log/nginx is empty.
error.log.1 has some entries, but they are from months ago. Same for access.log.1.
I am not a network guy.
It could be that, for some reason, nginx did not start after the reboot. Check its status and start it if necessary:
sudo service nginx status
sudo service nginx start
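If nginx simply isn't being started at boot, re-enabling it and checking the boot-time logs can help. Which command applies depends on whether your Ubuntu release uses systemd or upstart/sysvinit, so treat this as a sketch:
# systemd-based releases (Ubuntu 15.04 and later)
sudo systemctl enable nginx
sudo journalctl -u nginx --since today
# older releases using upstart/sysvinit
sudo update-rc.d nginx defaults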
I recently installed Ghost 1.8.4 and Nginx on my AWS ec2 Ubuntu 16.04 server. When I loaded my blog site, it correctly took me to the Ghost home page, from where I logged into Ghost admin. On the admin screen, there was a message to update.
I ran ghost update in putty
The update appeared to be successful, but when I returned to my blog site, I received the following error:
502 Bad Gateway
nginx/1.10.3 (Ubuntu)
Does anyone know a probable cause of this error and how to resolve it?
I checked some posts, which suggested I should have turned Ghost off before the update. If this is true, is my ghost installation now corrupted?
I went to my ghost directory in /var/www/ghost and tried to run:
sudo service ghost start
but it returned:
Failed to start ghost.service: Unit ghost.service not found
and trying to stop it returns Unit ghost.service not loaded. Am I running the command from the correct location?
I've experienced 502 issues with Ghost behind nginx several times over a few years of running it. I'm not sure if the cause of mine today is the same as yours, but what I observed was that, after a restart, Ghost had changed its port number to one different from the one its nginx config was proxying to.
I followed these directions from https://web.archive.org/web/20200807095031/https://www.danwalker.com/running-ghost-on-a-5-digital-ocean-vps/ which resolved it for me:
See which port ghost is running on:
sudo netstat -plotn
Check that it matches the proxy_pass in the nginx config file in /etc/nginx/sites-enabled.
In my case the port in the nginx config had incremented to 2369 while the actual node process was running on 2368. Changing the proxy_pass port back to 2368 in my ghost blog's nginx config file resolved the issue for me.
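For reference, the relevant part of a Ghost site's nginx config looks roughly like this; the file name, server_name, and port are placeholders, and a generated config will contain more directives:
# /etc/nginx/sites-enabled/yourblog.conf (illustrative)
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        # this port must match the one the Ghost node process is actually listening on
        proxy_pass http://127.0.0.1:2368;
    }
}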
I ran into the same problem after upgrading ghost.
Make sure the port number configured in your ghost's config file and the proxy_pass in your ghost site's nginx configuration files match.
Check that the port number in
/var/www/ghost/config.production.json matches the proxy_pass port in the nginx config files (a quick way to compare them is sketched after the file list):
/var/www/ghost/system/files/<yourDomainName>.<extension>.conf
/var/www/ghost/system/files/<yourDomainName>.<extension>-ssl.conf
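One way to do that comparison from the shell (jq is an extra tool you would need installed, and the .server.port key path is an assumption based on current Ghost configs, so double-check it against your own file):
# port Ghost itself is configured to use
jq '.server.port' /var/www/ghost/config.production.json
# port(s) nginx is proxying to
grep proxy_pass /var/www/ghost/system/files/*.conf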
In my case I had to change 2368 to 2369 in the nginx config files to fix the issue.
Make sure you restart your ghost and nginx after you make the changes.
# restart your ghost site
cd /var/www/ghost/
ghost restart
# restart nginx
sudo systemctl restart nginx
Hope this helps someone.
Apparently, when I posted this issue, it was due to a bug in the Ghost CLI that the Ghost team were in the process of fixing.
They provided me with these instructions to run on my server:
systemctl stop ghost_www-blogwebsite-com
ghost update --force
The resulting output:
stopping Ghost [skipped]
Removing old Ghost versions [skipped]
This fixed the problem and updated to the correct version.
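If you want to confirm what ended up running after an update like this, Ghost-CLI's own commands can report the installed version and the status of each install; run them from the install directory (here assumed to be /var/www/ghost):
cd /var/www/ghost
ghost version   # prints the Ghost-CLI and Ghost versions
ghost ls        # lists known installs and whether their process is running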
I have OS X El Capitan. I installed Nginx-Full via Homebrew. I am supposed to be able to start and stop services with
brew services Nginx-Full Start
I run that command and it seems to start with no problem. I check the running services with
brew services list
That indicates that the Nginx-Full service is running. When I run
htop
to look at everything that is running, nginx does not show up and the server is not handling requests.
nginx is failing to launch because of an error, but brew-services is not communicating that to you.
Running it with sudo, as other users have suggested, is just masking the problem. If you just run nginx directly, you may see that there is actually a configuration or permissions issue that is causing nginx to abort. In my case, it was because it couldn't write to the error log:
nginx: [alert] could not open error log file: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
2020/04/02 13:11:53 [warn] 19989#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/nginx.conf:2
2020/04/02 13:11:53 [emerg] 19989#0: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
The last error is causing nginx to fail to launch. You can make yourself the owner of the logs with:
sudo chown -R $(whoami) /usr/local/var/log/nginx/
This should cause subsequent config errors to be written to the error log, even if homebrew services is not reporting them in stderr/stdout for now.
I've opened an issue about this: https://github.com/Homebrew/homebrew-services/issues/215
The log path may not be the same for everyone. You can check the path to the log file in the config file /usr/local/etc/nginx/nginx.conf, where you will find a line like:
error_log /Users/myusername/somepath/nginx.log;
Change the chown command above accordingly. If even this doesn't solve the problem, you may have to do the same for any other log files specified in the server blocks of your nginx configuration.
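A quick way to surface any remaining configuration problems directly in the terminal, rather than relying on brew services, is nginx's own config test (the -c path shown is the usual Homebrew location; adjust it if yours differs):
nginx -t -c /usr/local/etc/nginx/nginx.conf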
Try launching it with sudo, even though the formula says:
The default port has been set in /usr/local/etc/nginx/nginx.conf to 8080 so that
nginx can run without sudo.
sudo brew services Nginx-Full start
This worked for me:
sudo brew services start nginx
Running sudo nginx worked for me. It initially gave an error stating that a certain file in a certain directory was missing; after creating that file (and then another file it asked for), it ran properly.
I had a similar problem: running brew services start nginx appeared to show nginx running,
but brew services list showed an error.
Running it with sudo nginx solved my issue.
I have Gitlab 8.6 running on an Ubuntu 14.04 server that seems to have gotten messed up. I consistently get a 502 error when accessing the site. The server likely has not been restarted since installing Gitlab initially, and a power outage caused the server to reboot. Now, I cannot start/restart Gitlab due to what appears to be port conflicts.
I installed Gitlab via source, I don't have any custom port configurations, and am using NGINX. nginx -t shows that the configuration appears to be correct syntax-wise.
When I run netstat -tupln, I see that Unicorn and a GitLab instance are already running on :8080 and :80 respectively at boot. I suspect that a second instance of GitLab was installed which is run at boot, and that it is causing the proper instance to have port conflicts when I try to run it via service gitlab restart. I'm not even sure if that's possible, but I can't seem to figure out where to go from here. Every time I run sudo gitlab-ctl reconfigure or service gitlab start, it fails and unicorn.stderr.log shows bind errors on port :8080. I tried moving the Unicorn service to :8081 as well, but I still receive the port-binding error.
Does anyone know how I can detect whether there are multiple GitLab instances running and, if so, how to remove the duplicate? Thank you!
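For reference, a quick way to see what is bound to the ports GitLab uses and whether more than one unicorn process is running (the port numbers and grep patterns here are only illustrative):
sudo netstat -tupln | grep -E ':(80|8080)\s'
ps aux | grep -E '[u]nicorn|[g]itlab'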
EDIT: Here is what is in the /etc/gitlab/gitlab.rb file. Everything else is commented out.
## Url on which GitLab will be reachable
external_url 'http://my-gitlab-instance.domain.com'
EDIT 2: My /home/git/gitlab/ directory is mapped to https://gitlab.com/gitlab-org/gitlab-ce.git, and is on the 8-7-stable branch. gitlab-shell and gitlab-workhorse are on the correct versions according to https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/8.6-to-8.7.md
EDIT 3: I have gotten to a point where the Gitlab seems to self-check okay by removing the gitlab-ce package (https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135), but the server returns a 404. NGINX, Unicorn, Sidekiq, and gitlab-workhorse all say that they're running. I see that unicorn.rb is listening on :8080, and nginx is listening on 0.0.0.0:80 and :::80. I guess now I'm troubleshooting this 404 and hopefully I will be back to my install-from-source.
What I found is that there were two issues causing the errors I had.
First, I removed a "gitlab-ce" package that was installed, following the instructions here: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135. For some reason, when I restart the machine now, I have to restart these services in this order for GitLab to run properly: redis-server, gitlab, nginx. However, GitLab does start responding properly after that.
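Concretely, the post-reboot sequence looks like this (service names taken from the list above, as they appear on my Ubuntu 14.04 install-from-source; yours may differ):
sudo service redis-server restart
sudo service gitlab restart
sudo service nginx restart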
Second, the 404 error was due to a different server that was also listening on that IP address, causing a conflict.
I will likely move to using the omnibus package on a fresh, new server going forward, but at least the immediate issues appear resolved. Thanks for your help, SLY!
I am trying to access etcd from within a running Docker container. When I run
curl http://172.17.42.1:4001/v2/keys
I get
curl: (7) Failed to connect to 172.17.42.1 port 4001: Connection refused
I have four other hosts where this works fine, but every container on this machine has this problem. I'm really at a loss as to what's going on, and I don't know how to debug it.
My etcd environment variables are
ETCD_ADVERTISE_CLIENT_URLS=http://10.242.10.2:2379
ETCD_DISCOVERY=https://discovery.etcd.io/<token_removed>
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.242.10.2:2380
ETCD_LISTEN_CLIENT_URLS=http://10.242.10.2:2379,http://127.0.0.1:2379,http://0.0.0.0:4001
ETCD_LISTEN_PEER_URLS=http://10.242.10.2:2380
I can also access etcd from the host with
curl http://localhost:4001/v2/keys
So there seems to be some error when routing from the container out to the host. But I can't figure out what it is. Can anyone point me in the right direction?
I observed I had to use the --advertise-client-urls and --listen-client-urls flags, like so:
./etcd --advertise-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001' --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
Then I was able to successfully do
curl -L http://hostname:2379/version
from any machine that could reach that server and it worked.
It turns out etcd was only listening on localhost:4001 on that machine, which is why I couldn't access it from within a container. This was despite my having configured one of the listen client URLs as 0.0.0.0:4001.
It turns out that I had run sudo systemctl enable etcd2, which caused it to run before the cloud-config service ran. As such, etcd started with default configuration instead of the one that I had specified in my cloud config. Running sudo systemctl disable etcd2 fixed the issue.
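To confirm that etcd is actually picking up the intended configuration after a change like this, it helps to check which addresses it is bound to on the host (netstat shown here; ss works similarly on newer systems):
sudo netstat -plnt | grep etcd
# or
sudo ss -lntp | grep etcd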