WordPress on AWS ELB errors 302

I am in the process of moving my EC2 web hosting environment behind an ELB. Static web pages work perfectly, but my WordPress sites (multisite) loop with a 302.
The Apache log reports "GET /", but WordPress is hosted in the /wp/ folder, so I would expect "GET /wp/".
See curl:
curl -v -k -H "Host: example.com" myELB.eu-west-1.elb.amazonaws.com/
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 301
< date: Wed, 03 Jun 2020 09:13:12 GMT
< content-type: text/html; charset=UTF-8
< content-length: 0
< location: https://example.com/
< server: Apache/2.4.29 (Ubuntu)
< x-redirect-by: WordPress
<
* Connection #0 to host myELB.eu-west-1.elb.amazonaws.com/ left intact
* Closing connection 0
Any suggestions?

It turns out the ELB talks to the EC2 instance over port 80. All I had to do was disable "Force SSL" in WordPress (in my case it was set by a plugin) and it worked.
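Alternatively, if you want to keep forcing HTTPS for visitors while the ELB talks plain HTTP to the instance, a common approach (a minimal sketch, not something from this thread; verify the header for your listener setup) is to tell WordPress to trust the X-Forwarded-Proto header the ELB sets, in wp-config.php before wp-settings.php is loaded:
/* Treat the request as HTTPS when the load balancer
 * terminated TLS; this makes is_ssl() return true and
 * stops SSL-forcing plugins from redirect-looping. */
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
     && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https' ) {
    $_SERVER['HTTPS'] = 'on';
}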

Related

nginx: behavior of Expect: 100-continue with HTTP redirect

I've been facing some issues with nginx and PUT redirects:
Let's say I have an HTTP service sitting behind an nginx server (assume HTTP/1.1).
The client does a PUT /my/api with Expect: 100-continue.
My service does not send a 100-continue; it sends a 307 redirect instead, to another endpoint (in this case, S3).
However, nginx is for some unknown reason sending a 100-continue before serving the redirect - the client then uploads the whole body to nginx before it sees the redirect. This makes the client effectively transfer the body twice, which isn't great for multi-gigabyte uploads.
I am wondering if there is a way to:
Prevent nginx from sending a 100-continue unless the service actually sends one.
Allow requests with an arbitrarily large Content-Length without having to set client_max_body_size to a large value (to avoid 413 Request Entity Too Large).
Since my service only ever sends redirects and never sends a 100-continue, the request body should never reach nginx. Having to set client_max_body_size and wait for nginx to buffer the whole body just to serve a redirect is quite suboptimal.
I've been able to do that with Apache, but not with nginx. Apache used to have the same behavior before it was fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 - I'm wondering if nginx has the same issue.
Any pointers appreciated :)
EDIT 1: Here's a sample setup to reproduce the issue:
An nginx listening on port 80, forwarding to localhost on port 9999
A simple HTTP server listening on port 9999, that always returns redirects on PUTs
nginx.conf
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

http {
    server {
        listen 80;
        server_name frontend;

        keepalive_timeout 75s;
        keepalive_requests 100;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:9999/;
        }
    }
}
I'm running the above with
docker run --rm --name nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.21.1
Simple python3 HTTP server:
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Answer every PUT with a 307 redirect without reading
        # the request body and without sending a 100 Continue.
        self.send_response(307)
        self.send_header('Location', 'https://s3.amazonaws.com/test')
        self.end_headers()

HTTPServer(("", 9999), Redirect).serve_forever()
Test results:
Uploading directly to the python server works as expected. The python server does not send a 100-continue on PUTs - it sends the 307 redirect immediately, before seeing the body.
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:9999/test
> PUT /test HTTP/1.1
> Host: 127.0.0.1:9999
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 307 Temporary Redirect
< Server: BaseHTTP/0.6 Python/3.9.2
< Date: Thu, 15 Jul 2021 10:16:44 GMT
< Location: https://s3.amazonaws.com/test
<
* Closing connection 0
* Issue another request to this URL: 'https://s3.amazonaws.com/test'
* Trying 52.216.129.157:443...
* Connected to s3.amazonaws.com (52.216.129.157) port 443 (#1)
> PUT /test HTTP/1.0
> Host: s3.amazonaws.com
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
>
Doing the same thing through nginx fails with 413 Request Entity Too Large - even though the body should never go through nginx.
After adding client_max_body_size 1G; to the config, the result is different, but nginx now tries to buffer the whole body:
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:80/test
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> PUT /test HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
} [65536 bytes data]
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Thu, 15 Jul 2021 10:22:08 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
{ [157 bytes data]
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
Notice how nginx sends an HTTP/1.1 100 Continue of its own accord.
With this simple python server, the request subsequently fails because the python server closes the connection right after serving the redirect, which causes nginx to serve the 502 due to a broken pipe:
127.0.0.1 - - [15/Jul/2021:10:22:08 +0000] "PUT /test HTTP/1.1" 502 182 "-" "curl/7.74.0"
2021/07/15 10:22:08 [error] 31#31: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: frontend, request: "PUT /test HTTP/1.1", upstream: "http://127.0.0.1:9999/test", host: "127.0.0.1"
So as far as I can see, this looks exactly like the following Apache issue, https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 (which is now addressed in newer versions). I am not sure how to work around this with nginx.
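As far as I know, nginx handles Expect: 100-continue itself and never forwards it upstream, so there may be no directive that suppresses the 100 Continue. A partial mitigation (a sketch based on the reproduction config above, not a confirmed fix for the double transfer) is to stream the body instead of buffering it and to disable the size check:
location / {
    # Forward the request body to the upstream as it arrives
    # instead of spooling it to disk first (nginx >= 1.7.11).
    proxy_request_buffering off;
    # 0 disables the request-body size check, so no 413 is
    # returned regardless of Content-Length.
    client_max_body_size 0;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:9999/;
}
This avoids the buffering and the 413, but the client may still start uploading before the redirect arrives, so it does not eliminate the duplicate transfer.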

301 Redirect pointing to a different location

I can browse the website https://builtsearch.com.au correctly in a browser, but when I use curl with Googlebot as the user agent I get this:
curl -A 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' -i https://builtsearch.com.au/
HTTP/1.1 301 Moved Permanently
Date: Wed, 03 Mar 2021 02:24:42 GMT
Server: Apache
X-Powered-By: PHP/7.3.18
X-Frame-Options: SAMEORIGIN
Location: http://www.micaze.com/kategori/izmir-escort/
Content-Length: 0
Content-Type: text/html; charset=UTF-8
This wrong 301 redirect is stopping Googlebot from indexing the website.
The website is using WordPress.
Any thoughts?
Thanks Tony McCreath. It turns out that my index.php had been modified by a malicious plug-in. Sanitizing index.php resolved the issue.
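For anyone hitting the same thing: assuming WP-CLI is available on the server (the commands below are standard WP-CLI, not something mentioned in this thread), tampered core files can be found and replaced with:
# Compare all WordPress core files, including index.php,
# against the official checksums and report any mismatch:
wp core verify-checksums

# Re-download pristine core files, leaving wp-content
# (themes, plugins, uploads) untouched:
wp core download --force --skip-content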

Nginx running: IP access in browser works but DNS name times out

I'm running into an issue that I can't solve myself...
I'm running a Debian 10 server with nginx freshly installed on it.
IPV4: 149.56.45.129, DNS: yocha.app
Result of hostnamectl:
Static hostname: yocha.app
Icon name: computer-vm
Chassis: vm
Machine ID: d72735cff36a41f0a5326f0bb7eb1778
Boot ID: 72dd9022a4894eeea82bc74480543823
Virtualization: kvm
Operating System: Debian GNU/Linux 10 (buster)
Kernel: Linux 4.19.0-13-cloud-amd64
Architecture: x86-64
My /etc/hosts:
127.0.0.1 localhost
149.56.45.129 yocha.app
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
My nginx sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name yocha.app;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
When I access my IP address in the browser, I get the nginx welcome page, which is good I guess.
BUT when I try to access the domain name, the request times out with no response...
I can log in over SSH using the domain name, I can ping it with no problems, I can even curl it, but when it comes to accessing it in a browser, nothing happens.
curl -I http://149.56.45.129:80
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 21 Jan 2021 13:40:16 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 21 Jan 2021 13:05:20 GMT
Connection: keep-alive
ETag: "60097c10-264"
Accept-Ranges: bytes
me@yocha:~$ curl -I http://yocha.app:80
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 21 Jan 2021 13:40:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 21 Jan 2021 13:05:20 GMT
Connection: keep-alive
ETag: "60097c10-264"
Accept-Ranges: bytes
(screenshot: http://yocha.app in a browser)
Anyone have a clue for me?
Thanks a lot in advance!
Your site is redirecting to https:
$ curl -v http://yocha.app
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.14.2
< Date: Fri, 29 Jan 2021 20:21:46 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://yocha.app/
and port 443 is not open or it's blocked:
$ telnet yocha.app 443
Trying 149.56.45.129...
telnet: Unable to connect to remote host: Connection timed out
DNS is fine: check your firewall, or make sure nginx is properly configured to listen on port 443 and to serve an SSL certificate.
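A minimal sketch of the missing server block (the certificate paths are placeholders, e.g. what certbot would create; they are not from the question):
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    server_name yocha.app;

    # Placeholder paths; point these at your real
    # certificate and private key.
    ssl_certificate     /etc/letsencrypt/live/yocha.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yocha.app/privkey.pem;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
If a firewall is in place (for example ufw on Debian, if you use it), open the port with ufw allow 443/tcp.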

Nginx removes Content-Length header when acting as reverse proxy behind a WAF

I have Nginx 1.16.1 as a reverse proxy in front of JFrog Artifactory, and both are reachable from external networks via a web application firewall. I am trying to get the Docker client working with this setup. It sends a HEAD request and expects a Content-Length in the response to check for the existence of a layer. Now I see that Content-Length is not included in the response received by the client. I can examine it by replaying with curl the same request that docker sends:
$ curl -H 'User-Agent: docker/19.03.13 go/go1.13.15 git-commit/4484c46d9d kernel/4.19.128-microsoft-standard os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.13 \(linux\))' \
-H "Authorization: Bearer ${TOKEN}" \
-H 'Connection: close' \
-I \
"https://${ARTIFACTORY_URL}/v2/${IMAGE}/blobs/${DIGEST}"
HTTP/1.1 200 OK
Date: Mon, 09 Nov 2020 14:57:05 GMT
Server: Secure Entry Server
Content-Type: application/octet-stream
Docker-Content-Digest: sha256:[MASKED]
Docker-Distribution-Api-Version: registry/2.0
X-Artifactory-Id: [MASKED]
X-Artifactory-Node-Id: [MASKED]
Set-Cookie: SCDID_S=[MASKED]; path=/; Secure; HttpOnly
Connection: close
However, I see in the access log of Artifactory that it sets this response header. I used tcpdump to see what data is exchanged between Nginx and Artifactory:
HEAD /v2/[MASKED]/blobs/[MASKED] HTTP/1.1
X-JFrog-Override-Base-Url: https://[MASKED]:443
X-Forwarded-Port: 443
X-Forwarded-Proto: https
Host: [MASKED]
X-Forwarded-For: 10.10.40.14
Connection: close
ClientCorrelator: 0rIKeSpqZ9E$
RequestCorrelator: 7f0100-9099-2020.11.09_1457.05.275-001
HSP_CLIENT_ADDR: [MASKED]
Hsp-ListenerUri: https://[MASKED]
HSP_HTTPS_HOST: [MASKED]:443
Accept: */*
Authorization: Bearer [MASKED]
User-Agent: docker/19.03.13 go/go1.13.15 git-commit/4484c46d9d kernel/4.19.128-microsoft-standard os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.13 \(linux\))
HTTPS: on
SSLSessionID: 78ad360e9ea54f5efdb72ea223a63b6cbc7788ae9a1e876620e398040d06182c
SSLSessionTimeLeft: 3600
SSLSessionAge: 0
SSLCipher: ECDHE-RSA-AES128-GCM-SHA256
SSLCipherKeySize: 128
SSLProtocolVersion: TLSv1.2
Via: HTTP/1.1 Secure Entry Server
HTTP/1.1 200 OK
Content-Length: 2529
Content-Type: application/octet-stream
Date: Mon, 09 Nov 2020 14:57:05 GMT
Docker-Content-Digest: [MASKED]
Docker-Distribution-Api-Version: registry/2.0
Server: Artifactory/7.4.1 70401900
X-Artifactory-Id: 5a2dee84b6d80d2f:1f521881:17554c79de4:-8000
X-Artifactory-Node-Id: [MASKED]
Connection: close
The TrafficAnalyzer on the WAF shows that Content-Length is missing from the incoming response from Artifactory. Hence it must be Nginx that removes it.
Now when I connect via VPN to get around the WAF, the response looks okay:
Host: [MASKED]
User-Agent: docker/19.03.13 go/go1.13.15 ...
Authorization: Bearer [MASKED]
Connection: close
Date: Fri, 06 Nov 2020 17:13:58 GMT
Content-Type: application/octet-stream
Content-Length: 2529
Docker-Content-Digest: [MASKED]
Docker-Distribution-Api-Version:registry/2.0
Server: Artifactory/7.4.1 70401900
X-Artifactory-Id: 5a2dee84b6d80d2f:1f521881:17554c79de4:-8000
X-Artifactory-Node-Id: [MASKED]
Connection: close
But I also notice that fewer headers are set in the request. Is there some additional WAF header that causes Nginx to remove Content-Length? I don't see anything related to this in the Nginx debug log. Any thoughts?
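One generic way to narrow down which hop drops the header (a diagnostic sketch; the internal host name, port, and IP below are placeholders, not details from the question) is to replay the same HEAD request against each tier and compare:
# Against Artifactory directly, bypassing nginx and the WAF:
curl -sI -H "Authorization: Bearer ${TOKEN}" \
    "http://artifactory-internal:8081/v2/${IMAGE}/blobs/${DIGEST}" \
    | grep -i content-length

# Against nginx from inside the network, bypassing only the WAF;
# --resolve makes curl connect to the given IP while keeping the
# original Host header and TLS SNI:
curl -sI -H "Authorization: Bearer ${TOKEN}" \
    --resolve "${ARTIFACTORY_URL}:443:10.0.0.5" \
    "https://${ARTIFACTORY_URL}/v2/${IMAGE}/blobs/${DIGEST}" \
    | grep -i content-length
If Content-Length survives both, the WAF hop is the remaining suspect; if it disappears at the nginx tier, the nginx debug log for that exact request is the place to look.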

HTTP headers return 404 on non-www URL?

I'm doing PHP get_headers() on an mp3 file on my server and receive HTTP 404 when using the non-www address and HTTP 200 when using www.
I can access the file from either address in the browser, so why the 404? Can I fix this somehow with .htaccess?
1) WordPress is configured to use the non-www address (example.com)
2) The files are in the wp-content/uploads area of the WordPress install
3) The www subdomain has a DNS CNAME pointing to the non-www domain (www.example.com -> example.com)
Headers for: http://lhcsj.org/wp-content/uploads/2012/05/2012-5-6-sj.mp3
HTTP/1.1 404 Not Found
Date: Tue, 08 May 2012 21:11:43 GMT
Server: Apache/2.2.3 (CentOS)
Content-Length: 314
Connection: close
Content-Type: text/html; charset=iso-8859-1
Headers for: http://www.lhcsj.org/wp-content/uploads/2012/05/2012-5-6-sj.mp3
HTTP/1.1 200 OK
Date: Tue, 08 May 2012 21:08:05 GMT
Server: Apache/2.2.3 (CentOS)
Last-Modified: Mon, 07 May 2012 17:19:47 GMT
ETag: "9c52430-e3626f-7a1332c0"
Accept-Ranges: bytes
Content-Length: 14901871
Connection: close
Content-Type: audio/mpeg
The fact that www.example.com and example.com point to the same IP address via a DNS CNAME entry doesn't mean that the server is configured to serve both. The server could be configured to handle a multitude of HTTP hosts, and the default might not be www.example.com but something else. It would in fact be unsurprising behaviour for it to return a 404 status for a host it isn't configured for (not even as a default host).
Check that there is a VirtualHost entry in your Apache Httpd configuration for each of www.example.com and example.com. The fact that WordPress is configured for a particular host only comes into play after that step has been passed.
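One way to cover both hosts with a single entry (a minimal sketch in Apache 2.2 syntax, matching the Server header above; the DocumentRoot is a placeholder):
NameVirtualHost *:80

<VirtualHost *:80>
    # Serve the bare domain and the www subdomain from the
    # same WordPress document root so the file resolves on both.
    ServerName lhcsj.org
    ServerAlias www.lhcsj.org
    DocumentRoot /var/www/html
</VirtualHost>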
