Laravel Valet sites connection refused on port 80 - nginx

Ever since Chrome and Safari started forcing HTTPS redirection on the .dev TLD I've been having issues connecting to my Laravel Valet sites.
Without knowing that Chrome and Safari had done this, I updated Valet from 1.2 without thinking to check in Firefox first (doh! 😖). I've installed the latest stable release of Laravel Valet (2.0.6) on High Sierra (10.13.2), completely removed the running version of homebrew/php/php70 and reinstalled it to be sure, and have now changed the TLD to .test.
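For what it's worth, the TLD change was done with Valet's domain command - the syntax below is Valet 2.0's, if I recall correctly:

valet domain test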
I can ping the domain without any packet loss; however, when I run
curl oldabp.test --verbose
I get
* Rebuilt URL to: oldabp.test/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connection failed
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to oldabp.test port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to oldabp.test port 80: Connection refused
I've also cleared the DNS cache using the command found in this article, restarted Valet, and rebooted. I retested after each of these steps and the output remained the same.
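For reference, the usual flush command on High Sierra - an assumption on my part, since the article isn't quoted here:

sudo killall -HUP mDNSResponder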

I found a Stack Overflow answer to a similar question here that did the trick. During the valet install step I noticed that it installed and started nginx, and I don't remember it ever restarting nginx in the numerous times I've run valet restart before, despite it telling me that it had restarted successfully. It boils down to very odd behaviour, which has now been resolved.
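A sketch of the kind of fix involved (assumed, since the linked answer isn't quoted here): force the Homebrew-managed nginx to actually stop, then let Valet bring it back up as root.

valet stop
sudo brew services stop nginx    # make sure the brew-managed nginx really dies
valet start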

Related

Chrome Only ERR_SSL_PROTOCOL_ERROR

One of the sites I manage recently started having a problem where, if you try to access it via Chrome (and apparently Opera and the new Edge), you get the ERR_SSL_PROTOCOL_ERROR error page. Unfortunately, I have not been able to find any further information about what the underlying error actually is.
All of the suggestions elsewhere list things that could be wrong on the client side, except server clock skew - which existed, but was fixed with no effect on the error.
Running echo | openssl s_client -servername <site_url> -connect <site_url>:443 on a Mac gives a successful TLSv1.2 response.
Running https https://<site_url> gives a successful HTTP response.
The site works in IE11, Firefox, and Edge (old), as well as with the openssl command line.
The site is WordPress running on Ubuntu 14.04 (OpenSSL 1.0.1f) with PHP 5.6 and nginx 1.6.2.
As I said above - I have already corrected the ~3 minute clock skew on the server that could have caused SSL failures.
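For reference, a one-shot NTP sync is enough to correct that kind of skew on Ubuntu 14.04, assuming ntpdate is installed:

sudo ntpdate -u pool.ntp.org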
Any help is appreciated.
--Update--
I get the following protocol vulnerabilities when checking on SSL Labs:
[screenshot: SSL Labs protocol section]
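Since Chrome, Opera, and the new Edge are all Chromium-based, one thing worth testing - my assumption, not something taken from the SSL Labs report - is whether the server offers an ECDHE/AES-GCM cipher suite, a common sticking point for Chromium against older OpenSSL-era configs:

echo | openssl s_client -servername <site_url> -connect <site_url>:443 -cipher ECDHE-RSA-AES128-GCM-SHA256

If that handshake fails while the plain s_client call above succeeds, the ssl_ciphers list in nginx is the likely culprit.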

Pages of WordPress sites installed on localhost not showing

I have installed three WordPress sites on a WAMP localhost:
localhost/pacific_ocean
localhost/surfing_new
localhost/construction
Everything was working yesterday. Today, when I go to the above URLs, the screen is blank white and the WordPress sites are not showing.
1.) I have checked options.php and the site URL is fine: http://localhost/pacific_ocean/
2.) I have run 'netstat -abno' in an administrator command prompt and it shows that [httpd.exe] is running on port 80.
Then I ran httpd.exe -e warn and here's what I got:
(OS 10048)Only one usage of each socket address (protocol/network address/port) is normally permitted. : AH00072: make_sock: could not bind to address 0.0.0.0:80
AH00451: no listening sockets available, shutting down
AH00015: Unable to open logs
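That AH00072/AH00451 pair means this second httpd.exe couldn't bind because port 80 was already taken - consistent with the netstat output above. To confirm exactly which process holds the port (a hedged suggestion, since the post doesn't show this step; <pid> comes from the netstat output):

netstat -abno | findstr ":80"
tasklist /FI "PID eq <pid>"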
Is httpd.exe getting in the way of Apache running, or is this another issue?
One of the sites shows an index of folders, but that's it.
Can someone help me?
Thanks!
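One hedged suggestion for the blank pages themselves: a white screen in WordPress is usually a fatal PHP error with display_errors turned off, so the PHP error log is worth a look. The path below is WAMP's default and may differ on your install:

type C:\wamp\logs\php_error.log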

GitLab stopped working after installing iRedMail

I have a little problem.
I have a VPS with Debian 8. I have installed an apache2 server and GitLab CE with default settings on it.
My server works on port 80 and my GitLab worked on port 81 (external_url).
Everything stopped working when I installed iRedMail.
I have now uninstalled iRedMail, but GitLab still doesn't work.
After running "gitlab-ctl restart", all services report ok (gitlab-workhorse, logrotate, nginx, postgresql, redis, sidekiq, unicorn), but if I try to open the GitLab site, the page times out.
What should I check to solve this problem?
I have to add that I have checked the listening ports and nginx is in the list.
tcp 0 0 0.0.0.0:81 0.0.0.0:* LISTEN 4534/nginx
---------- EDIT
"stop working" = after I type "external_url" (which worked) I got "Connection timed out".
I have checked the logs in /var/log/gitlab for
gitlab-rails, gitlab-shell, gitlab-workhorse, nginx, etc.
In the gitlab-rails folder I found the files production.log (empty) and sidekiq.log. The latter contains many ERRORS (I paste the first 2 lines; the rest are similar):
2015-12-14_14:57:44.37657 2015-12-14T14:57:44.376Z 14796 TID-daijk ERROR: Error fetching message: No such file or directory - connect(2) for /var/opt/gitlab/redis/redis.socket
2015-12-14_14:57:44.37672 2015-12-14T14:57:44.376Z 14796 TID-daijk ERROR: /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/redis-3.2.1/lib/redis/connection/ruby.rb:180:in `connect_nonblock'
I have to add that my Apache server works fine on :80.
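The sidekiq errors say the Redis socket under /var/opt/gitlab/redis/ is simply missing, so a plausible first step - a guess on my part, since iRedMail may have clobbered the bundled services' configuration - is to regenerate the omnibus config and check that the socket comes back:

sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
ls -l /var/opt/gitlab/redis/redis.socket
sudo gitlab-ctl status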

OpenStack - Web console connection refused

Just getting started with OpenStack.
Got everything set up on an Ubuntu VM (under Parallels).
When I attempt to log into the browser console as admin (the password was set during the DevStack install), I get:
HTTPConnectionPool(host='10.211.55.16', port=8774): Max retries exceeded with url: /v2/a586870bde4c4dfc993dc40cab8047b7/extensions (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
I am, however, able to run CLI commands such as keystone tenant-list, and all the others, on the actual server.
I made sure that I'm able to ping the virtual Ubuntu host from my Mac. When I first browse to http://myhost.mydomain I do get a login page, but as soon as I enter admin's credentials I get this ugly (and super long) error.
What things could I check to fix this?
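Port 8774 is the Nova (compute) API, so one check - a guess on my part, not something from the post - is whether anything is actually listening there on the host:

sudo netstat -tlnp | grep 8774
curl -v http://10.211.55.16:8774/

If nothing is listening, nova-api died or never started, which would explain the connection-refused error from the dashboard.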
Resolution:
1) Wiped my Ubuntu host clean
2) Followed the step-by-step instructions here: http://www.stackgeek.com/guides/gettingstarted.html
Everything now works without a glitch.

vagrant puphpet nginx 502 bad gateway error

I have installed nginx via PuPHPet and I am using Laravel 4.1 with CentOS 6.4. Laravel needs the PHP APC module, which I have included in the PuPHPet config.yaml file. After I do a vagrant up and go to my site, I get: connect() to unix:/var/run/php5-fpm.sock failed (111: Connection refused) while connecting to upstream. I changed my nginx fastcgi_pass to "/var/run/php5-fpm.sock", which didn't work. Then I did vagrant ssh and ran service php-fpm restart, and after that it works. But I don't want to have to configure anything after I run vagrant up - that's the purpose of Puppet. Now my question is: is there any way I can restart php-fpm when I do vagrant up, or any other way to solve the PHP APC problem? Thanks in advance.
Solution: after hours of researching I was able to solve the problem. I added this code:
# restart php-fpm once it is configured; the path attribute is needed
# because Puppet's exec won't run an unqualified command without one
exec { 'restart php-fpm':
  command => 'service php-fpm restart',
  path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
}
in manifest.pp at the end of the php-fpm class. For me that's at line 485 or so, right after the service.
I'd much rather you submit an issue via GitHub: https://github.com/puphpet/puphpet/issues
That said, you can run any arbitrary code on $ vagrant up and $ vagrant provision via the exec-once and exec-always features mentioned on the front page.
That also said, this is a bug I'd love to fix, so please submit a ticket!
