Laravel Valet Share URL Leads to connection refused - ngrok

I have been trying to share a local site running on Laravel Valet using a temporary URL (https://xxxxx.ngrok.io). I am on Valet 1.1.22:
valet --version
Laravel Valet version 1.1.22
When the site is served securely, the ngrok URL leads to a connection refused error. When it is served unsecured it leads to connection refused as well (a 404 appears only with Valet running on a secondary machine, since the site would not be found there). Either that, or a DNS resolution issue, as I mention later on.
Locally, on my Wi-Fi network and on the PC itself, it works just fine. The access logs show me this:
127.0.0.1 - [03/Oct/2016:08:57:06 +0300] "POST /server.php?doing_wp_cron=1475474226.5450510978698730468750 HTTP/1.1" 200 0
127.0.0.1 - [03/Oct/2016:08:57:07 +0300] "POST /server.php HTTP/1.1" 200 47
127.0.0.1 - [03/Oct/2016:08:59:09 +0300] "POST /server.php?doing_wp_cron=1475474348.8563120365142822265625 HTTP/1.1" 200 0
127.0.0.1 - [03/Oct/2016:08:59:10 +0300] "POST /server.php HTTP/1.1" 200 47
I still do not see an error related to a refused connection in this log at ~/.valet/Log/access.log. The error logs show old errors, not related to this issue. The ngrok window in the terminal shows 301 Moved Permanently on the two loads I just tried. The ngrok status site at http://localhost:4040/status showed me:
GET / HTTP/1.1
Host: site.dev
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.8 (KHTML, like Gecko) Version/9.1.3 Safari/601.7.8
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-us
X-Forwarded-For: xx.xx.xx.xxx
X-Forwarded-Proto: https
X-Original-Host: xxxxxx.ngrok.io
and then the redirect:
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=UTF-8
Location: http://mysite.dev/
Server: Caddy
Status: 301 Moved Permanently
X-Powered-By: PHP/7.0.11
X-Ua-Compatible: IE=edge
Date: Mon, 03 Oct 2016 06:14:10 GMT
Content-Length: 0
Here is the Caddyfile for completeness (generated by Valet):
import /Users/jasper/.valet/Caddy/*

:80 {
    fastcgi / 127.0.0.1:9000 php {
        index server.php
    }

    rewrite {
        to /server.php?{query}
    }

    log /Users/jasper/.valet/Log/access.log {
        rotate {
            size 10
            age 3
            keep 1
        }
    }

    errors {
        log /Users/jasper/.valet/Log/error.log {
            size 10
            age 3
            keep 1
        }
    }
}
Ngrok is running too (this output was added after the TLD was changed to .localhost):
ps aux | grep ngrok
jasper 1260 0.0 0.2 556735952 28692 s001 S+ 10:23AM 1:27.14 /Users/jasper/.composer/vendor/laravel/valet/bin/ngrok http -host-header=rewrite site.localhost:80
root 1254 0.0 0.1 2463108 8964 s001 S+ 10:23AM 0:00.01 sudo -u jasper /Users/jasper/.composer/vendor/laravel/valet/bin/ngrok http -host-header=rewrite site.localhost:80
jasper 3557 0.0 0.0 2432804 2096 s000 S+ 2:36PM 0:00.00 grep ngrok
So the request does hit the Caddy server, and the ngrok status page shows that. But Caddy then issues a redirect, which translates into connection refused or DNS resolution problems for the browsers. So what is the issue here?

In the end I realized WordPress was creating an extra redirect through its permalink structure. So when you turn off permalinks, you can share your Laravel Valet WordPress site with the outside world using ngrok. Not the perfect solution, but at least one that works and lets you show work in progress to clients while it runs on your local machine.
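If WP-CLI happens to be installed (an assumption; it is not part of Valet itself), toggling permalinks off is scriptable from the site root, a sketch:

# disable pretty permalinks (empty structure = plain permalinks)
wp option update permalink_structure ''
# rebuild the rewrite rules so the change takes effect
wp rewrite flush

Setting the option back to its previous structure (for example /%postname%/) re-enables pretty permalinks afterwards.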

Related

Curl ends with "curl: (6) Could not resolve host: HTTP"

I've been following a blog post on how to compile ModSecurity with nginx. To verify that everything works, I created the file /etc/nginx/conf.d/echo.conf, which contains:
server {
    listen localhost:8085;

    location / {
        default_type text/plain;
        return 200 "Thank you for requesting ${request_uri}\n";
    }
}
I ran the following in a terminal:
sudo nginx -s reload
curl -D - http://localhost:8085 HTTP/1.1 200 OK
and I got
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Wed, 10 Jun 2020 19:31:08 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive
Thank you for requesting /
curl: (6) Could not resolve host: HTTP
I have been on this for hours and can't figure out what to do. The two possible causes I've found were
IPv6 enabled
Wrong DNS server
I ran the command with --ipv4 (curl --ipv4 -D - http://localhost:8085 HTTP/1.1 200 OK) with no success.
I also changed the nameserver in /etc/resolv.conf to 8.8.8.8 instead of 127.0.0.53 which also didn't work.
Any clues on what to do?
That error message appears because of the command syntax you used. With curl it is enough to run:
curl -D - http://localhost:8085
This makes an HTTP request to the web server you specify (localhost in this case). Any additional arguments that cannot be parsed as options are treated as extra URLs to query, so curl tries to fetch HTTP as if you had typed http://HTTP, which simply will not work, at least until you define an entry for a host named HTTP in /etc/hosts, for example.
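You can see the multiple-URL behaviour directly, since curl accepts any number of URLs in a single invocation; a quick demonstration against the same test server (assuming it is still listening on 8085):

# both arguments are treated as URLs and fetched in turn
curl -D - http://localhost:8085 http://localhost:8085

Each URL gets its own request and header dump; the stray HTTP/1.1, 200, and OK tokens in the original command were being treated exactly like that second URL.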

wget says 406 Not acceptable

I have a simple file on my web server, and when I request it in a browser, it loads without problems:
http://example.server/report.php
But when I request the file with wget from a Raspberry Pi, I get this:
$ wget -d --spider http://example.server/report.php
Setting --spider (spider) to 1
DEBUG output created by Wget 1.18 on linux-gnueabihf.
Reading HSTS entries from /home/pi/.wget-hsts
URI encoding = 'ANSI_X3.4-1968'
converted 'http://example.server/report.php' (ANSI_X3.4-1968) -> 'http://example.server/report.php' (UTF-8)
Converted file name 'report.php' (UTF-8) -> 'report.php' (ANSI_X3.4-1968)
Spider mode enabled. Check if remote file exists.
--2018-06-03 07:29:29-- http://example.server/report.php
Resolving example.server (example.server)... 49.132.206.71
Caching example.server => 49.132.206.71
Connecting to example.server (example.server)|49.132.206.71|:80... connected.
Created socket 3.
Releasing 0x00832548 (new refcount 1).
---request begin---
HEAD /report.php HTTP/1.1
User-Agent: Wget/1.18 (linux-gnueabihf)
Accept: */*
Accept-Encoding: identity
Host: example.server
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 406 Not Acceptable
Date: Fri, 15 Jun 2018 08:25:17 GMT
Server: Apache
Keep-Alive: timeout=3, max=200
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
---response end---
406 Not Acceptable
Registered socket 3 for persistent reuse.
URI content encoding = 'iso-8859-1'
Remote file does not exist -- broken link!!!
I read somewhere that it might be an encoding problem, so I tried
$ wget -d --spider --header="Accept-encoding: *" http://example.server/report.php
but that gives me the exact same error.
That's because the server you're connecting to serves content only to certain User-Agents.
Change the user agent and it works fine:
wget -d --user-agent="Mozilla/5.0 (Windows NT x.y; rv:10.0) Gecko/20100101 Firefox/10.0" http://example.server/report.php
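If the Raspberry Pi needs this on every request, wget can also read the value from its startup file instead of the command line (a sketch; the UA string is just an example, and ~/.wgetrc is the per-user location):

# ~/.wgetrc — applied to every wget invocation for this user
user_agent = Mozilla/5.0 (Windows NT x.y; rv:10.0) Gecko/20100101 Firefox/10.0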

Cache Clearing Not Working

I am at my wits' end. I have just set up a DNS change for a WordPress-managed site. It was working fine and I was in the back end, but now I keep getting a blue web page with this URL (http://81.21.76.62/index.html?domain=globalone.org.uk) when I try to go back to it. I have called the domain company and the WordPress hosting company, and both of them can see the site as it should be; I have also tried another computer and it works fine there. I have tried Chrome, Firefox and IE, cleared all of their caches multiple times, and restarted the browser AND the computer, but it's still not working. Please can anyone help?
DNS records can be, and usually are, cached by the local ISP and home routers for the zone's refresh/TTL time, which is usually several hours (for the domain in your link it is 4 hours at the time of writing).
Before that cache expires, you can check the site by resolving it manually against the authoritative nameserver and injecting the IP:
% host globalone.org.uk ns.123-reg.co.uk
Using domain server:
Name: ns.123-reg.co.uk
Address: 212.67.202.2#53
Aliases:
globalone.org.uk has address 160.153.136.1
% curl -v 160.153.136.1 -H 'Host: globalone.org.uk'
* Trying 160.153.136.1...
* Connected to 160.153.136.1 (160.153.136.1) port 80 (#0)
> GET / HTTP/1.1
> Host: globalone.org.uk
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< X-Pingback: http://www.globalone.org.uk/xmlrpc.php
< Content-Type: text/html; charset=UTF-8
< X-Port: port_10444
< X-Cacheable: YES:Forced
< Location: http://www.globalone.org.uk/
< Transfer-Encoding: chunked
< Date: Mon, 04 Apr 2016 16:59:53 GMT
< Age: 0
< Vary: User-Agent
< X-Cache: uncached
< X-Cache-Hit: MISS
< X-Backend: all_requests
(The above shows that your WordPress site is working fine; you just have to wait for the DNS caches to update.)
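If you do not want to wait, you can also point just your own machine at the new address with a temporary /etc/hosts entry, using the IP resolved above (remember to remove it once the caches have expired):

# /etc/hosts — temporary override while stale DNS is still cached
160.153.136.1 globalone.org.uk www.globalone.org.uk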

Nagios check_http gives 'HTTP/1.0 503 Service Unavailable' for HAProxy site

Can't figure this one out!
OS: CentOS 6.6 (Up-To-Date)
I get the following 503 error when using my Nagios check_http check (or curl) to query an SSL site served via HAProxy 1.5.
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1
GET / HTTP/1.1
User-Agent: check_http/v2.0 (nagios-plugins 2.0)
Connection: close
Host: example.com
https://example.com:443/ is 212 characters
STATUS: HTTP/1.0 503 Service Unavailable
**** HEADER ****
Cache-Control: no-cache
Connection: close
Content-Type: text/html
**** CONTENT ****
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HTTP CRITICAL: HTTP/1.0 503 Service Unavailable - 212 bytes in 1.076 second response time |time=1.075766s;;;0.000000 size=212B;;;0
[root@nagios ~]# curl -I https://example.com
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
However, I can access the site fine via any browser (200 OK), and also with curl -I https://example.com from another server:
root@localhost:~# curl -I https://example.com
HTTP/1.1 200 OK
Date: Wed, 18 Feb 2015 14:36:51 GMT
Server: Apache/2.4.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Last-Modified: Wed, 18 Feb 2015 14:36:52 GMT
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=31536000;
The HAProxy server is running on pfSense 2.2.
I see that HAProxy returns HTTP/1.0 for Nagios and HTTP/1.1 from elsewhere. So is it my check_http plugin causing this, or is it curl?
Is my server just not sending the HOST header? If so, how can I resolve this?
By default check_http requests the document root, so it effectively tests whether the server returns a page (such as an index.html) there. This means HTTP may be running and working while the check still fails.
Regardless of whether creating an index.html file on the server resolves the issue, you might not want to bend the server around making the check pass.
Setting up a ping check against example.com, plus a check via NRPE that verifies the HTTP service is running, might meet your requirements, as sketched below.
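For example, such a pair of checks might be defined like this (a sketch; the generic-service template, the thresholds, and the remote check_http NRPE command name are assumptions about your setup):

# ping the host itself
define service {
    use                   generic-service
    host_name             example.com
    service_description   PING
    check_command         check_ping!100.0,20%!500.0,60%
}

# ask the NRPE agent on the host whether the HTTP service is up
define service {
    use                   generic-service
    host_name             example.com
    service_description   HTTP via NRPE
    check_command         check_nrpe!check_http
}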
check_http has an option called --sni; you need to use that option. HAProxy is presumably selecting the backend based on the TLS SNI hostname, so a client that never sends it falls through to the 503 "no server available" page.
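Applied to the check from the question, that would look like this (a sketch; same host and flags as above):

/usr/local/nagios/libexec/check_http -v -H example.com -S1 --sni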

Chrome MULTIPLE_CONTENT_LENGTH error

If I access my page directly, I get:
$ wget http://localhost:8010/ --save-headers -O -
--2010-10-29 18:30:24-- http://localhost:8010/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8010... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Date: Fri, 29 Oct 2010 16:30:24 GMT
Connection: keep-alive
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Length: 950
Content-Type: text/html; charset=utf-8
Content-Language: en-us
If I access it via the cache:
$ wget http://localhost:8000/ --save-headers -O -
--2010-10-29 18:30:31-- http://localhost:8000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Type: text/html; charset=utf-8
Content-Language: en-us
Content-Length: 950
Date: Fri, 29 Oct 2010 16:30:31 GMT
X-Varnish: 818233557
Age: 0
Via: 1.1 varnish
Connection: keep-alive
When I open the latter in Chromium (8.0.552.18 (0)), I get this error:
Error 346 (net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_LENGTH): Unknown error.
I only see three extra headers; which one should I remove to make it display in Chrome?
EDIT: I eventually got rid of this problem, but I can't remember how, and I no longer have access to that system. I'm starting a bounty; maybe somebody can explain to me what was going on here.
Check out this version of the Chromium source. It looks like if you do not specify Transfer-Encoding and the response includes multiple Content-Length headers, it throws this very error. Later revisions added a check so that the error is only raised when the duplicated lengths actually differ; it seems to have been added as a security precaution against response-splitting attacks.
You probably would not see this error at all with a newer version of Chromium.
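To confirm whether duplicate headers were actually on the wire, counting occurrences in the raw response is enough (a diagnostic sketch against the cached endpoint from the question):

# dump only the headers and count Content-Length lines
curl -s -D - -o /dev/null http://localhost:8000/ | grep -ci '^content-length:'

A count greater than 1 means the response really carries duplicate Content-Length headers, for instance when both the backend and the cache layer add one.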
You might try disabling the DNS prefetching in the Chromium settings. Go to Preferences > Under the Hood and un-check "Use DNS pre-fetching to improve page load times".
