ngrok: how would I set this up?

I am using ngrok.
I have an IFTTT applet set to send a URL request to ngrok.
What I need help with is how I would also include this: /?action=[Speak("front%20door%20open")]&key=ABC1234
IFTTT Settings:
https://99f0-172-56-7-96.ngrok.io/?action=[Speak("front%20door%20open")]&key=ABC1234
ngrok settings:
Session Status online
Account dh********so#gmail.com (Plan: Free)
Version 3.1.0
Region United States (us)
Latency 75ms
Web Interface http://localhost:54657
Forwarding https://99f0-172-56-7-96.ngrok.io -> http://localh
Connections ttl opn rt1 rt5 p50 p90
1 0 0.00 0.00 0.00 0.00
HTTP Requests
PUT /
If there is any other info that I need to add, please let me know.
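For what it's worth, here is a hedged sketch of how this usually fits together (not confirmed in the thread): the ngrok agent forwards the path and query string of the public URL to the local service unchanged, so appending the action/key parameters to the ngrok URL in the IFTTT webhook, exactly as in the IFTTT settings above, should be all that is required, as long as characters like [, ] and " stay percent-encoded. You can check what the local app would receive by requesting it directly, bypassing ngrok (the local port below is an assumption, since the forwarding target is cut off in the ngrok output):
# Hypothetical direct request to the local service that ngrok forwards to
curl "http://localhost:80/?action=%5BSpeak(%22front%20door%20open%22)%5D&key=ABC1234"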

Related

Running ngrok http 80 gives a black screen and I am not able to get the external URL to connect to

I started using ngrok to create a tunnel to get HTTP port 80 access to some local files.
It was working great until tonight.
When I run ngrok http 80 I get the usual startup screen for about 30 seconds, then a black screen comes up and I am unable to get my external link.
This will load, then the terminal window goes blank:
ngrok (Ctrl+C to quit)
Session Status connecting
Version 3.1.0
Latency -
Web Interface http://127.0.0.1:4040
Connections ttl opn rt1 rt5 p50 p90
0 0 0.00 0.00 0.00 0.00
# ngrok http 80 --log stdout
INFO[11-05|09:28:17] no configuration paths supplied
INFO[11-05|09:28:17] using configuration at default config path path=/root/.config/ngrok/ngrok.yml
INFO[11-05|09:28:17] open config file path=/root/.config/ngrok/ngrok.yml err=nil
t=2022-11-05T09:28:17-0400 lvl=info msg="starting web service" obj=web addr=127.0.0.1:4040
t=2022-11-05T09:28:22-0400 lvl=warn msg="failed to check for update" obj=updater err="Post \"https://update.equinox.io/check\": context deadline exceeded"
panic: send on closed channel
goroutine 48 [running]:
go.ngrok.com/lib/tlsx.CRLVerifyConfig.verifyIssuer.func1()
go.ngrok.com/lib/tlsx/crl.go:104 +0xf5
go.ngrok.com/lib/nsync.(*Group).Go.func1()
go.ngrok.com/lib/nsync/group.go:69 +0x44
created by go.ngrok.com/lib/nsync.(*Group).Go
go.ngrok.com/lib/nsync/group.go:68 +0x128
I did not make any changes to the ngrok config.
I spoke with ngrok support and they verified that it looks like my ISP was blocking something.
They advised editing the ngrok.yml file and adding
crl_noverify: true
to the file.
After that, the command "ngrok http 80" works as normal.
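For reference, a minimal sketch of what the edited config might look like (only the crl_noverify line comes from the thread; the version and authtoken fields are assumptions based on the usual ngrok v3 config layout, and the path is taken from the log above):
# /root/.config/ngrok/ngrok.yml
version: "2"
authtoken: <your-authtoken>
crl_noverify: true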

Chilkat HTTP with HTTPS

I'm currently using the Chilkat HTTP ActiveX control (version 9.3.2.0) with VB6. One of the servers I download files from is switching over to HTTPS, but I can't get it to work. Using HTTP it works perfectly, but when I change the URL to HTTPS the call returns 0.
Here is the result of Http.LastErrorText:
ChilkatLog:
Download:
DllDate: Aug 5 2012
UnlockPrefix: **********
Username: BILL-DESKTOP:Bill
Architecture: Little Endian; 32-bit
Language: ActiveX
VerboseLogging: 0
backgroundThread: 0
url: https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?file=gfs.t12z.pgrb2.0p25.f000&lev_10_m_above_ground=on&lev_2_m_above_ground=on&lev_entire_atmosphere=on&lev_entire_atmosphere_%5C%28considered_as_a_single_layer%5C%29=on&lev_mean_sea_level=on&lev_surface=on&var_APCP=on&var_PRMSL=on&var_TCDC=on&var_TMP=on&var_UGRD=on&var_VGRD=on&leftlon=0&rightlon=360&toplat=90&bottomlat=-90&dir=%2Fgfs.2018120712
toLocalPath: C:\Progra~1\PCGrADS\gfs\grib\gfs_pgrbf_000.grib2
localFileAlreadyExists: 0
QuickGetToOutput_Download:
qGet_1:
simpleHttpRequest_3:
httpMethod: GET
requestUrl: https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?file=gfs.t12z.pgrb2.0p25.f000&lev_10_m_above_ground=on&lev_2_m_above_ground=on&lev_entire_atmosphere=on&lev_entire_atmosphere_%5C%28considered_as_a_single_layer%5C%29=on&lev_mean_sea_level=on&lev_surface=on&var_APCP=on&var_PRMSL=on&var_TCDC=on&var_TMP=on&var_UGRD=on&var_VGRD=on&leftlon=0&rightlon=360&toplat=90&bottomlat=-90&dir=%2Fgfs.2018120712
Connecting to web server...
httpServer: nomads.ncep.noaa.gov
port: 443
Using HTTPS.
ConnectTimeoutMs_1: 10000
calling ConnectSocket2
IPV6 enabled connect with NO heartbeat.
connectingTo: nomads.ncep.noaa.gov
dnsCacheLookup: nomads.ncep.noaa.gov
Resolving domain name (IPV4)
GetHostByNameHB_ipv4: Elapsed time: 140 millisec
myIP_1: 192.168.1.38
myPort_1: 55564
connect successful (1)
clientHelloMajorMinorVersion: 3.1
buildClientHello:
majorVersion: 3
minorVersion: 1
numRandomBytes: 32
sessionIdSize: 0
numCipherSuites: 10
numCompressionMethods: 1
--buildClientHello
TlsAlert:
level: fatal
descrip: handshake failure
--TlsAlert
Closing connection in response to fatal error.
Failed to read incoming handshake messages. (1)
Client handshake failed. (3)
Failed to connect to HTTP server.
connectElapsedMs: 640
--simpleHttpRequest_3
--qGet_1
--QuickGetToOutput_Download
bFileDeleted: 1
totalElapsedMs: 672
ContentLength: 0
Failed.
--Download
--ChilkatLog
What am I doing wrong?
Regards,
Bill
You are using an old version from 2012, which did not yet implement TLS 1.2; the log shows the client offering only TLS 1.0 (clientHelloMajorMinorVersion: 3.1), which the server rejects with a handshake failure. Chilkat added TLS 1.2 support many years ago, and the latest version should work fine.
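A hedged way to double-check this diagnosis from the command line (not part of the original answer): force openssl to TLS 1.0 and then to TLS 1.2 against the same host. Depending on your OpenSSL build, the first attempt may be refused locally rather than by the server.
# Mimic the old control's handshake (TLS 1.0) - expect a handshake failure
openssl s_client -connect nomads.ncep.noaa.gov:443 -tls1
# Retry with TLS 1.2 - expect a successful handshake and a certificate chain
openssl s_client -connect nomads.ncep.noaa.gov:443 -tls1_2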

HAProxy - Reject connection if client is flooding the server

I want to reject the connection if a user spams the server with requests. My current config looks like this:
frontend http_front
    bind *:80
    log global
    stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s)
    # Increase gpc0 if requests in the last 10s were greater than 10
    acl conn_rate_abuse src_http_req_rate gt 10
    acl mark_as_abuser src_inc_gpc0 gt 0
    tcp-request connection track-sc1 src
    # Reject if gpc0 is greater than 0
    tcp-request connection reject if conn_rate_abuse mark_as_abuser
    default_backend http_back
The socket output looks like this:
0x1e455c0: key=10.23.27.55 use=0 exp=51149 gpc0=0 http_req_rate(10000)=422
What am I doing wrong?!
Edit:
With this code it works, but shouldn't it work with only the code above?
backend http_back
    balance roundrobin
    acl abuse src_http_req_rate(http_front) ge 10
    tcp-request content reject if abuse
    server test1 ip1:80 check
    server test2 ip2:80 check
HA-Proxy version 1.6.4 2016/03/13
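One likely explanation (an assumption, not confirmed in the thread): tcp-request connection rules are evaluated only once, when the TCP connection is accepted and before any HTTP request has been parsed, so a client that keeps a connection open can push many requests without the frontend rule ever being re-checked, while the backend's tcp-request content rule is evaluated later, once the request has been parsed and counted. A hedged frontend-only sketch that denies at the HTTP level instead, keeping the asker's thresholds (the http-request deny rule is the only new piece):
frontend http_front
    bind *:80
    log global
    stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s)
    tcp-request connection track-sc1 src
    # Deny every request from an IP whose request rate over 10s exceeds 10
    http-request deny if { src_http_req_rate gt 10 }
    default_backend http_back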

Why is every request being processed by PHP-FPM? (even though I'm using a cache)

I'm running WordPress with Nginx + PHP-FPM + APC + W3 Total Cache + PageSpeed.
After 3 days of researching and configuring, I finally got it working.
Running "top" while hitting some cached pages shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13387 nginx 20 0 472m 11m 4664 S 12.3 2.0 0:46.55 nginx
17577 nginx 20 0 443m 47m 29m S 0.7 8.0 0:42.88 php-fpm
17591 nginx 20 0 438m 43m 29m S 0.7 7.2 0:42.59 php-fpm
1486 mysql 20 0 851m 21m 4832 S 0.3 3.7 1:24.71 mysqld
17907 nginx 20 0 438m 48m 34m S 0.3 8.1 0:36.78 php-fpm
18065 nginx 20 0 442m 47m 29m S 0.3 8.0 0:33.49 php-fpm
18543 nginx 20 0 445m 63m 42m S 0.3 10.6 0:22.94 php-fpm
21125 root 20 0 15012 1148 868 R 0.3 0.2 0:00.86 top
1 root 20 0 19356 1388 1136 S 0.0 0.2 0:00.74 init
1) Why is every request being processed by PHP-FPM? Isn't W3 Total Cache supposed to prevent the request from being processed by PHP-FPM?
I know that my pages are being cached because every page returns this at the end of the HTML:
<!-- Performance optimized by W3 Total Cache. Learn more: http://www.w3-edge.com/wordpress-plugins/
Page Caching using disk: enhanced
2) If I install Varnish in front of Nginx, will it stop the request from being processed by PHP-FPM? (Will performance increase? I'm using a Micro EC2 with 613 MB of RAM.)
PS: The server is returning the response header "Cache-Control: max-age=0, no-cache". I don't know if this influences W3 caching.
My specs:
Amazon Micro EC2
Linux version 3.4.48-45.46.amzn1.x86_64 Red Hat 4.6.3-2 (I think it's based on CentOS 5)
PHP 5.3.26 (fpm-fcgi)
I'm not aware of how W3 Total Cache works, but let me state some facts.
First of all, at the Nginx level, any PHP page must hit the PHP engine, because that is probably what your try_files directive tells Nginx to do. If W3 Total Cache keeps some sort of HTML cache of the pages, then without changes to the Nginx config you will still hit PHP even when the cache exists. And if the cache is not actually stored as HTML, then the PHP engine probably checks for the existence of the cached page and decides whether to rebuild it or serve the cached copy, so you definitely need PHP to run; the difference is that it won't hit the database and won't do any processing, it will just serve the cached page.
Second question, Varnish: yes, Varnish would actually be good. It would spare you the need for a cache plugin, but you need to make sure WordPress knows when to ask Varnish to purge cached pages. The structure of the server would be user -> Varnish -> Nginx -> PHP. If Varnish has a cached page or asset (CSS, JS, etc.), it serves it directly without passing the request on to Nginx. I tried it on a website and the response times of cached pages definitely improved; even when I did a Ctrl+F5 to request the whole page without cache, it still returned very fast, as if the page were plain HTML.
You'll still need to mess around with the Varnish config, or at least that's what I did, because it takes a bit of learning, but so far all I did was copy and paste from blogs and it worked just fine for me.
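For the first question, a hedged Nginx sketch of the kind of change this answer alludes to (not the poster's actual config): point try_files at the cache file W3 Total Cache writes for "Page Caching using disk: enhanced" before falling back to index.php, so cache hits never reach PHP-FPM. The cache path below is the layout commonly used by the plugin and may differ between versions, and real-world rules also skip the cache for logged-in users, POST requests and URLs with query strings.
location / {
    # Serve the W3TC page-cache file directly when it exists;
    # only fall back to index.php (and PHP-FPM) on a cache miss.
    try_files /wp-content/cache/page_enhanced/${host}${request_uri}_index.html $uri $uri/ /index.php?$args;
}
Note that the concatenated cache path only resolves when the request URI ends with a trailing slash, which is exactly the detail the follow-up below runs into.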
I installed Varnish in front of my server, but requests were still being processed by PHP-FPM.
The problem was the lack of a slash at the end of the URL.
In WordPress, a page is a directory, so it responds as www.mysite.com/page1/.
The point is that when you hit www.mysite.com/page1 (without the slash), Nginx has to redirect to www.mysite.com/page1/ (with the slash), and that redirect goes through PHP-FPM.
After putting the slash at the end of all links on my site, the redirect no longer happened, and my pages were no longer processed by PHP-FPM.
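A hedged alternative to editing every link (not from the original post): issue the trailing-slash redirect in Nginx itself, so it never reaches PHP-FPM. This sketch assumes no extensionless static files are served from the host.
# Redirect /page1 to /page1/ at the Nginx level, preserving any query string
location ~ ^([^.]*[^/])$ {
    return 301 $1/$is_args$args;
}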

Can't connect to WordPress SVN server to update repository

Okay, for some reason this morning, I am unable to connect to the WordPress SVN repository and execute basic svn commands (e.g. checkout, update).
Here's an example of what's happening:
$ svn co http://svn.automattic.com/wordpress/tags/3.3/
# Adds a bunch of files...
svn: warning: Error handling externals definition for '3.3/wp-content/plugins/akismet':
svn: warning: PROPFIND of '/!svn/vcc/default': could not connect to server (http://plugins.svn.wordpress.org)
Checked out revision 19597.
$ cd 3.3
$ svn update
svn: OPTIONS of 'http://svn.automattic.com/wordpress/tags/3.3': could not connect to server (http://svn.automattic.com)
Yet, when I perform these same commands on a development server I have (a Linode VPS), they work fine.
I've googled around about this quite a bit and found pages like these:
http://vsingleton.blogspot.com/2008/04/svn-propfind-request-failed-on.html
http://wordpress.org/support/topic/cant-connect-to-the-pluginssvnwordpress-server
A lot of these articles say something to the effect of "it's your proxy server." Well, I'm not behind a proxy server:
http://whatismyipaddress.com/proxy-check
Proxy server not detected.
IP 24.21.xxxx.xxx
rDNS FALSE
WIMIA Test FALSE
TOR Test FALSE
Loc Test FALSE
Header Test FALSE
DNSBL Test FALSE
Just a regular old Comcast home internet connection.
Also, I can browse the WordPress SVN repository just fine via my browser.
Anyhow, I'm sort of at a dead end here, and I guess I'm wondering if anyone has any suggestions on how to either solve the issue or work around it. I tried setting up a forward proxy server on the Apache installation I have running on that dev server and then updating my ~/.subversion/servers file, but that didn't work, or I configured something wrong.
Well, if anyone has any brilliant ideas or explanations, I'd love to hear them...
Update
I had a co-worker test this out on his home connection -- he uses Comcast as well. He got the same error as I did. So it appears to be some Comcast-related issue specific to the WordPress SVN repository. I was able to check out other public repositories over HTTP (e.g. from Google Code) just fine.
I ran a series of tests and I could not find any hidden proxies or cache servers between me and the repository.
I did run traceroute per Lazy Badger's suggestion, and here's what I got:
$ traceroute svn.automattic.com
traceroute to svn.automattic.com (72.233.56.196), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 0.659 ms 0.292 ms 0.185 ms
2 * * *
3 te-5-7-ur01.hollywood.or.bverton.comcast.net (68.85.150.225) 8.792 ms 8.309 ms 9.054 ms
4 xe-3-1-0-0-ar03.beaverton.or.bverton.comcast.net (68.87.216.33) 14.354 ms 24.859 ms 8.753 ms
5 pos-3-8-0-0-cr01.sacramento.ca.ibone.comcast.net (68.86.95.117) 21.869 ms
pos-3-1-0-0-cr01.sacramento.ca.ibone.comcast.net (68.86.95.113) 21.791 ms
pos-3-0-0-0-cr01.sacramento.ca.ibone.comcast.net (68.86.95.109) 22.983 ms
6 pos-0-7-0-0-cr01.sanjose.ca.ibone.comcast.net (68.86.85.46) 23.682 ms 25.043 ms 24.675 ms
7 xe-10-3-0.edge1.sanjose1.level3.net (4.71.118.5) 61.048 ms 23.986 ms 24.221 ms
8 vlan80.csw3.sanjose1.level3.net (4.69.152.190) 25.257 ms 25.648 ms
vlan90.csw4.sanjose1.level3.net (4.69.152.254) 24.310 ms
9 ae-82-82.ebr2.sanjose1.level3.net (4.69.153.25) 24.870 ms
ae-92-92.ebr2.sanjose1.level3.net (4.69.153.29) 25.371 ms
ae-91-91.ebr1.sanjose1.level3.net (4.69.153.13) 24.744 ms
10 ae-34-34.ebr4.sanjose1.level3.net (4.69.153.34) 36.011 ms 25.975 ms 36.053 ms
11 ae-5-5.ebr2.sanjose5.level3.net (4.69.148.141) 25.236 ms 25.307 ms 25.305 ms
12 ae-6-6.ebr2.losangeles1.level3.net (4.69.148.201) 31.299 ms 34.076 ms 33.401 ms
13 ae-3-3.ebr3.dallas1.level3.net (4.69.132.78) 59.012 ms 58.604 ms 60.576 ms
14 ae-83-83.csw3.dallas1.level3.net (4.69.151.157) 59.708 ms 65.724 ms
ae-73-73.csw2.dallas1.level3.net (4.69.151.145) 60.383 ms
15 ae-42-90.car2.dallas1.level3.net (4.69.145.196) 60.636 ms
ae-22-70.car2.dallas1.level3.net (4.69.145.68) 59.572 ms 59.758 ms
16 databank-ho.car2.dallas1.level3.net (4.71.170.2) 58.711 ms 59.994 ms 60.561 ms
I don't know if that's unusual or anything. I tried the same on my dev server and the result looked mostly similar, save for line 2 with the * * *.
I successfully configured a forward proxy on my dev server, so I've hacked together a solution for now, but I still don't quite understand what is afoot...
Update 2
In response to a question, here's how I configured things to use my dev server as a proxy for the time being.
First, I configured Apache on my dev server to run as a proxy. Make sure these directives are somewhere in your Apache configuration file chain (httpd.conf, the vhosts.d directory, etc.):
Listen 8080
<VirtualHost _default_:8080>
    ProxyRequests On
    ProxyVia On
    ProxyPreserveHost On
    <Proxy *>
        Order deny,allow
        Deny from all
        Allow from xxx.xxx.xxx.xxx
    </Proxy>
</VirtualHost>
This assumes you have a working Apache setup on a development server somewhere (I would definitely not use this on a production server) with mod_proxy installed. Port 8080 is arbitrary. Basically, for an unmatched virtual host (i.e. any request that doesn't match the other hosts you have set up), it turns proxying on and forwards the request through. Change "xxx.xxx.xxx.xxx" to your own IP address.
Now you have to change the server setting in your subversion config file.
In this file:
~/.subversion/servers
Find this section:
[global]
# http-proxy-exceptions = *.exception.com, www.internal-site.org
# http-proxy-host = proxy1.some-domain-name.com
# http-proxy-port = 80
# http-proxy-username = defaultusername
# http-proxy-password = defaultpassword
# http-compression = no
# http-auth-types = basic;digest;negotiate
# No http-timeout, so just use the builtin default.
# No neon-debug-mask, so neon debugging is disabled.
# ssl-authority-files = /path/to/CAcert.pem;/path/to/CAcert2.pem
Uncomment http-proxy-host and http-proxy-port. For the host, use a spare domain name you have mapped to your development server, or presumably you could just use the server's IP. Then set the port to 8080 or whatever you used.
This should route all Subversion HTTP requests through the proxy you just set up. It doesn't affect svn:// or svn+ssh:// requests.
This was my quick hack, your mileage may vary, this might be totally insecure or broken, etc.
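For clarity, a hedged sketch of how the uncommented lines might end up looking (the host name is a placeholder for whatever domain or IP points at your dev server):
[global]
http-proxy-host = dev.example.com
http-proxy-port = 8080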
I have Comcast business both at my home office & the corporate office.
BOTH FAIL TO CONNECT TO THE REPO ON COMCAST.
However, I never have a problem if I go over the Windstream T1 or connect via our live server on multiple backbones.
Comcast appears to be "traffic shaping" and/or monitoring business class traffic and breaking the Internet!
Nice job Comcast!
If you don't have an alternate connection then you may need to use a proxy service, and then send Comcast a nasty email about their network filtering.
