k6 redirects localhost to loopback

I'm on a Mac and I'm attempting to run my k6 script against http://localhost:4200 (angular app) locally.
The angular app is running and I can access it via the browser and using curl.
My k6 script has the base URL set to http://localhost:4200. However, all requests are being made to http://127.0.0.1:4200 instead, which is denied by macOS.
How do I force k6 NOT to rewrite localhost to the loopback address?
EDIT
Adding various outputs of curl -vv.
localhost:4200
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 4200 (#0)
> GET / HTTP/1.1
> Host: localhost:4200
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Content-Type: text/html; charset=utf-8
< Accept-Ranges: bytes
< Content-Length: 942
< ETag: W/"3ae-UQojFJZul+b6hEhgbvnN6wFCVuA"
< Date: Thu, 20 Jan 2022 21:38:55 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>MyApp</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script src="assets/scripts/apm.js"></script>
<link rel="apple-touch-icon" sizes="180x180" href="/assets/images/apple-touch-icon.png">
<link rel="icon" type="image/x-icon" href="favicon.ico">
<link rel="icon" type="image/png" sizes="32x32" href="/assets/images/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/assets/images/favicon-16x16.png">
<link rel="manifest" href="/site.webmanifest">
<link rel="stylesheet" href="styles.css"></head>
<body>
<app-root></app-root>
<script src="runtime.js" type="module"></script><script src="polyfills.js" type="module"></script><script src="styles.js" defer></script><script src="vendor.js" type="module"></script><script src="main.js" type="module"></script></body>
</html>
* Connection #0 to host localhost left intact
* Closing connection 0
127.0.0.1:4200
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connection failed
* connect to 127.0.0.1 port 4200 failed: Connection refused
* Failed to connect to 127.0.0.1 port 4200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 4200: Connection refused
EDIT 2
Hosts file
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal

There is no application listening on port 4200 on your IPv4 address 127.0.0.1, the IPv4 loopback address. When k6 makes a request to localhost, the hostname resolves to 127.0.0.1.
However, your application seems to be listening on port 4200 on your IPv6 address ::1, the IPv6 loopback address. curl resolves the hostname localhost to its IPv6 address.
How are you binding your application to the port? Usually, when binding to all interfaces of a host, you'd use the special IP address 0.0.0.0.
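You can see the split resolution for yourself. A minimal sketch using Python's standard socket module (port 4200 is just the port from the question; nothing needs to be listening for the lookup to work):

```python
import socket

# Ask the system resolver for every address that "localhost" maps to.
# With a typical hosts file this yields 127.0.0.1 (IPv4) and, when IPv6
# is enabled, ::1 (IPv6); which one a client picks is a policy decision.
infos = socket.getaddrinfo("localhost", 4200, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)
```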
I see a few potential solutions:
Make your application bind to both IPv4 and IPv6, usually done by binding to the address 0.0.0.0.
Change your k6 script to connect to the IPv6 address ::1 directly.
Specify --dns "policy=preferIPv6" on the command line, or add dns: { policy: "preferIPv6" } to your options (available since k6 v0.29.0).
Disable IPv6 in your OS. This is a drastic change and I wouldn't recommend it.
Change your hosts file to resolve localhost to the IPv4 address.

Related

nginx: behavior of Expect: 100-continue with HTTP redirect

I've been facing some issues with nginx and PUT redirects:
Let's say I have an HTTP service sitting behind an nginx server (assume HTTP 1.1)
The client does a PUT /my/api with Expect: 100-continue.
My service is not sending a 100-continue, but sends a 307 redirect instead, to another endpoint (in this case, S3).
However, nginx for some unknown reason sends a 100-continue before serving the redirect, so the client proceeds to upload the whole body to nginx before the redirect is served. This causes the client to effectively transfer the body twice, which isn't great for multi-gigabyte uploads.
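For reference, the intended handshake, as it works without nginx in the middle, can be sketched with raw sockets; this is a hypothetical minimal upstream and client, not the real setup:

```python
import socket
import threading

def upstream(listener):
    # Accept one connection, read the request headers, and reply with a
    # redirect immediately -- never sending "100 Continue".
    conn, _ = listener.accept()
    data = b""
    while b"\r\n\r\n" not in data:
        data += conn.recv(4096)
    conn.sendall(b"HTTP/1.1 307 Temporary Redirect\r\n"
                 b"Location: https://s3.amazonaws.com/test\r\n"
                 b"Content-Length: 0\r\n\r\n")
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))  # ephemeral port
port = listener.getsockname()[1]
t = threading.Thread(target=upstream, args=(listener,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
# Send only the headers; a well-behaved client then waits for
# "100 Continue" before uploading the body.
client.sendall(b"PUT /test HTTP/1.1\r\n"
               b"Host: localhost\r\n"
               b"Content-Length: 531202949\r\n"
               b"Expect: 100-continue\r\n\r\n")
reply = client.recv(4096).decode()
print(reply.splitlines()[0])  # the client sees the 307, so no body is sent
client.close()
t.join()
```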
I am wondering if there is a way to:
Prevent nginx from sending 100-continue unless the service actually sends it.
Allow requests with an arbitrarily large Content-Length without having to set client_max_body_size to a large value (to avoid 413 Request Entity Too Large).
Since my service only sends redirects and never sends 100-continue, the request body is never supposed to reach nginx. Having to set client_max_body_size and wait for nginx to buffer the whole body just to serve a redirect is quite suboptimal.
I've been able to do this with Apache, but not with nginx. Apache used to have the same behavior before it was fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 - I'm wondering if nginx has the same issue.
Any pointers appreciated :)
EDIT 1: Here's a sample setup to reproduce the issue:
An nginx listening on port 80, forwarding to localhost on port 9999
A simple HTTP server listening on port 9999, that always returns redirects on PUTs
nginx.conf
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

http {
    server {
        listen 80;
        server_name frontend;
        keepalive_timeout 75s;
        keepalive_requests 100;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:9999/;
        }
    }
}
I'm running the above with
docker run --rm --name nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.21.1
Simple python3 HTTP server.
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_PUT(self):
        self.send_response(307)
        self.send_header('Location', 'https://s3.amazonaws.com/test')
        self.end_headers()

HTTPServer(("", 9999), Redirect).serve_forever()
Test results:
Uploading directly to the python server works as expected. The python server does not send a 100-continue on PUTs - it will directly send a 307 redirect before seeing the body.
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:9999/test
> PUT /test HTTP/1.1
> Host: 127.0.0.1:9999
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 307 Temporary Redirect
< Server: BaseHTTP/0.6 Python/3.9.2
< Date: Thu, 15 Jul 2021 10:16:44 GMT
< Location: https://s3.amazonaws.com/test
<
* Closing connection 0
* Issue another request to this URL: 'https://s3.amazonaws.com/test'
* Trying 52.216.129.157:443...
* Connected to s3.amazonaws.com (52.216.129.157) port 443 (#1)
> PUT /test HTTP/1.0
> Host: s3.amazonaws.com
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
>
Doing the same thing through nginx fails with 413 Request Entity Too Large - even though the body should not need to go through nginx.
After adding client_max_body_size 1G; to the config, the failure mode changes, but nginx still tries to buffer the whole body:
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:80/test
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> PUT /test HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
} [65536 bytes data]
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Thu, 15 Jul 2021 10:22:08 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
{ [157 bytes data]
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
Notice how nginx sends an HTTP/1.1 100 Continue on its own, before the upstream has responded.
With this simple Python server, the request subsequently fails because the Python server closes the connection right after serving the redirect, which causes nginx to serve a 502 due to a broken pipe:
127.0.0.1 - - [15/Jul/2021:10:22:08 +0000] "PUT /test HTTP/1.1" 502 182 "-" "curl/7.74.0"
2021/07/15 10:22:08 [error] 31#31: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: frontend, request: "PUT /test HTTP/1.1", upstream: "http://127.0.0.1:9999/test", host: "127.0.0.1"
So as far as I can see, this looks exactly like the following Apache issue, https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 (which is addressed in newer versions). I am not sure how to circumvent this with nginx.
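For the buffering half of the problem, nginx does have a standard knob: proxy_request_buffering (ngx_http_proxy_module, available since 1.7.11) streams the request body to the upstream as it arrives instead of buffering it first, and client_max_body_size 0 disables the size check entirely. A sketch of the location block from the config above with both applied; note this addresses buffering and the 413, not nginx answering the Expect: 100-continue itself:

```nginx
location / {
    # Stream the body to the upstream instead of buffering it first
    proxy_request_buffering off;

    # 0 disables the body-size check, avoiding 413 without an arbitrary cap
    client_max_body_size 0;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:9999/;
}
```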

Wordpress on AWS ELB errors 302

I am in the process of moving my EC2 web hosting environment behind an ELB. Static web pages work perfectly, but WordPress sites (multisite) loop with 302s.
The Apache log reports "GET /", but the hosting folder for WordPress is "/wp/".
See curl:
curl -v -k -H "Host: example.com" myELB.eu-west-1.elb.amazonaws.com/
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 301
< date: Wed, 03 Jun 2020 09:13:12 GMT
< content-type: text/html; charset=UTF-8
< content-length: 0
< location: https://example.com/
< server: Apache/2.4.29 (Ubuntu)
< x-redirect-by: WordPress
<
* Connection #0 to host myELB.eu-west-1.elb.amazonaws.com/ left intact
* Closing connection 0
Any suggestions?
Turns out the ELB communicates with EC2 via port 80. All I had to do was disable "Force SSL" in WordPress (in my case it was done by a plugin) and it worked.

Based on country code of http_cookie preference rewrite to the appropriate sites on Nginx

How do I route based on a cookie preference saved by the end user?
We have nginx/1.17.10 running as a pod in AKS; our eCommerce site is hosted on it.
CloudFlare is the front end, acting as DNS and WAF.
CloudFlare has GeoIP turned on, so we have the $http_cf_ipcountry parameter to trace the country code. However, we want to use the preference saved by the end user and route to that specific region.
Example:
If $http_cookie contains COUNTRY_CODE=UAE,
then rewrite example.com --> example.com/en-ae.
If $http_cookie contains COUNTRY_CODE=KW,
then rewrite example.com --> example.com/en-kw.
If there is no preference saved in the cookie, route to the default example.com.
The $http_cookie parameter also holds other details such as _cfduid, COUNTRY_CODE_PREV, CURRENCY_CODE and EXCHANGE_RATE.
What should be the best approach to handle this requirement?
Can anyone help me with this? Thanks!
I would create a map to construct the redirect URLs.
http://nginx.org/en/docs/http/ngx_http_map_module.html#map
This will set the rewrite URL in a variable, $new_uri. The default, if no cookie value is present, will be /en-en/. Now you can create a rewrite rule.
rewrite ^(.*)$ $new_uri permanent;
Here is an updated config example as requested.
map $cookie_user_country $new_uri {
    default /en-en/;
    UAE     /en-ae/;
    KW      /en-kw/;
}

server {
    listen 8080;
    return 200 "$uri \n";
}

server {
    listen 8081;
    rewrite ^(.*)$ $new_uri permanent;
    return 200 "$cookie_user_country \n";
}
Use the $cookie_NAME variable to get the value of a single cookie. The $http_VAR variable contains the value of a specific HTTP request header.
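As an illustration of what nginx extracts, the same Cookie-header parsing can be mimicked with Python's standard http.cookies module (only an analogy; nginx does its own parsing):

```python
from http.cookies import SimpleCookie

# The same Cookie header the curl example sends.
cookie = SimpleCookie()
cookie.load("user_country=KW; test=id; abcc=def")

# nginx would expose this value as $cookie_user_country.
print(cookie["user_country"].value)  # KW
```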
See my curl request for more details.
[root@localhost conf.d]# curl -v --cookie "user_country=KW; test=id; abcc=def" localhost:8081
* About to connect() to localhost port 8081 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8081
> Accept: */*
> Cookie: user_country=KW; test=id; abcc=def
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.17.6
< Date: Sun, 26 Apr 2020 12:34:15 GMT
< Content-Type: text/html
< Content-Length: 169
< Location: http://localhost:8081/en-kw/
< Connection: keep-alive
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.17.6</center>
</body>
</html>
* Connection #0 to host localhost left intact
To check whether the currently running nginx binary contains the map module, type
strings `which nginx` | grep ngx_http_map_module | head -1
This will list all "printable" strings from the nginx binary and grep the output by "ngx_http_map_module". The result should look like this:
[root@localhost conf.d]# strings `which nginx` | grep ngx_http_map_module | head -1
--> ngx_http_map_module
If the output equals ngx_http_map_module, the currently running nginx binary was compiled with map support. If not, make sure you are using an nginx binary compiled with map support.

HTTP request - Moved permanently?

I am trying to put some theoretical study of HTTP into practice. So I tried to make a HEAD request (I also tried GET, but prefer HEAD since I am interested in the actual object) and it went as follows:
~$ telnet youtube.com 80
Trying 216.58.211.110...
Connected to youtube.com.
Escape character is '^]'.
HEAD /watch?v=GJvGf_ifiKw HTTP/1.1
Host: youtube.com
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://youtube.com/watch?v=GJvGf_ifiKw
Date: Thu, 12 Dec 2019 15:48:41 GMT
Content-Type: text/html
Server: YouTube Frontend Proxy
X-XSS-Protection: 0
As you can see, I am requesting the object located at /watch?v=GJvGf_ifiKw on the host youtube.com, and together these form youtube.com/watch?v=GJvGf_ifiKw, which is the URL in the Location header field. What's going on here? Why does it say it has moved to the identical location?
If you look closely at the output, you will see that you have been redirected to HTTPS. Your initial request was made with telnet on port 80, the default HTTP port, and since YouTube enforces redirection to HTTPS, the request is redirected to the identical location, but over secure HTTP, which is HTTPS (note the https:// scheme in the Location header).
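The two URLs in play differ only in scheme, which Python's urllib makes easy to verify (both URLs are taken from the telnet session):

```python
from urllib.parse import urlsplit

requested = "http://youtube.com/watch?v=GJvGf_ifiKw"   # what the HEAD asked for
location = "https://youtube.com/watch?v=GJvGf_ifiKw"   # the Location header

a, b = urlsplit(requested), urlsplit(location)
print(a.scheme, "->", b.scheme)  # http -> https
# Host, path, and query are all identical; only the scheme changed.
print(a.netloc == b.netloc and a.path == b.path and a.query == b.query)  # True
```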

Squid DNS FAIL when trying to connect to localhost

I'm running a local http server and local squid instance. A local http client opens a socket connecting to the squid instance, which seems to work. I then try to tunnel to the local http server by issuing the following http request:
CONNECT localhost:80 HTTP/1.1\r\n
which yields the response headers
Content-Language en
Content-Length 3612
Content-Type text/html;charset=utf-8
Date Thu, 21 Jun 2018 17:28:10 GMT
Mime-Version 1.0
Server squid/3.5.27
Vary Accept-Language
X-Squid-Error ERR_DNS_FAIL 0
with status 503.
I also tried connecting to 127.0.0.1, which yields this response:
Content-Language en
Content-Length 3433
Content-Type text/html;charset=utf-8
Date Thu, 21 Jun 2018 17:35:16 GMT
Mime-Version 1.0
Server squid/3.5.27
Vary Accept-Language
X-Squid-Error ERR_CONNECT_FAIL 111
My squid.conf looks like this:
http_port 3128
coredump_dir /var/spool/squid
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 1025-65535 # unregistered ports
acl CONNECT method CONNECT
acl any_host src all
acl all_dst dst all
http_access allow any_host
http_access allow all_dst
Is there a different way to tell squid to connect to localhost?
I found that what was failing was localhost resolving to [::1] and not 127.0.0.1.
To bypass /etc/hosts, simply add the following to /etc/squid/hosts:
127.0.0.1 localhost
Then set hosts_file /etc/squid/hosts in your squid.conf.
Of course the file can be put anywhere you would like.
Somehow squid tried to resolve localhost to 127.0.0.1, which ended up in a connection failure. Specifying [::1] instead of localhost, however, performs as expected.
In my case I was using the squid machine's hostname (e.g. mysquid.proxy), and the problem was not related to DNS resolution, because the squid machine could resolve its own hostname correctly.
The problem was instead caused by the configuration of an additional port on the same proxy. I was using squid as both a forward proxy and a reverse proxy, with two different ports:
3128 - forward proxy
443 - reverse proxy
The client was connecting to the (forward) proxy mysquid.proxy:3128 and the request was something like:
CONNECT mysquid.proxy:443 HTTP/1.1
So the reverse proxy port was used in the end.
However, on that port a url_rewrite_program (a Perl script) was configured to filter and rewrite the paths of specific URLs, and that script was wrongly redirecting the request to a non-existent URL, which caused the "503 Service Unavailable" error on the client.
