I'm setting up a test environment for our Symfony website. I have a basic version working on my Windows machine for development, but when I try to set up an AWS replica of the production site as a test environment, every valid page ends up in an infinite 301 redirect loop. I'm guessing I've missed something in the configuration.
Symfony 2.8
AWS Ubuntu server
SSL enabled, and non-Symfony files served correctly
These are the raw response headers for /app/dashboard:
HTTP/1.1 301 Moved Permanently
Cache-Control: no-cache
Content-Type: text/html; charset=UTF-8
Date: Mon, 06 Nov 2017 05:43:18 GMT
Location: https://****.***/app/dashboard
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.22
Content-Length: 424
Connection: keep-alive
The Apache, PHP, and Symfony /app/config/parameters.yml configurations are identical to those in the Production and Dev environments, except for server addresses. Composer has been run to download all the project dependencies.
Production and Dev both work fine. It's only Test that has the infinite redirect loops.
I'm sure there's something simple I've overlooked but I can't find it.
UPDATE 8 Nov 2017
/app_dev.php works, but /app.php has the infinite loop.
GET /app.php (https)
Location: http://****.***/
GET / (http)
Location: http://****.***/app/dashboard
GET /app/dashboard (http)
Location: https://****.***/app/dashboard
GET /app/dashboard (https)
Location: https://****.***/app/dashboard
Unfortunately, the problem went away "by itself". I had tried a few things to debug it that didn't work, so I used rsync to reset all the files back to a pristine state, at which point the problem stopped happening.
UPDATE
I managed to break it again somehow. I then remembered I'd changed the trusted_proxies in the configuration, and that got overwritten when I rsync'd it. Adding the correct settings back in fixed the problem!
framework:
    ...
    trusted_hosts: ~
    trusted_proxies: [<correct settings go in here>]
    ...
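For context, here is a minimal sketch in Python (illustrative only, not Symfony internals; the URL and function names are made up) of why a missing trusted_proxies setting produces exactly this loop when a TLS-terminating load balancer sits in front of the app: the app always receives plain HTTP from the proxy, and unless it trusts the proxy's X-Forwarded-Proto header, it concludes the request is insecure and issues yet another redirect to https.

```python
# Illustrative sketch of the redirect loop (not Symfony code).
# The proxy terminates TLS, so the app always sees plain HTTP on the wire;
# X-Forwarded-Proto is the proxy's claim about the original client scheme.

def is_secure(headers: dict, wire_scheme: str, proxy_trusted: bool) -> bool:
    """Decide whether the original client request used HTTPS."""
    if proxy_trusted and headers.get("X-Forwarded-Proto") == "https":
        return True                   # believe the proxy about the client side
    return wire_scheme == "https"     # otherwise believe only what we see

def handle(headers: dict, wire_scheme: str, proxy_trusted: bool) -> str:
    """Force-HTTPS logic: redirect any request that doesn't look secure."""
    if not is_secure(headers, wire_scheme, proxy_trusted):
        return "301 https://example.test/app/dashboard"
    return "200 OK"

headers = {"X-Forwarded-Proto": "https"}
print(handle(headers, "http", proxy_trusted=False))  # 301 ... -> loop forever
print(handle(headers, "http", proxy_trusted=True))   # 200 OK
```

With the proxy untrusted, every request looks insecure and the 301 points back at the same URL, which matches the trace in the update above.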
Related
I'm trying to mirror the whole website at http://opposedforces.com/parts/impreza/en_g11/type_63/
Accessing it through a browser (Firefox, w3m) or Postman works fine and returns the HTML file.
Accessing through wget, cURL, the Python requests module and HTTrack all fail.
wget specifically fails with:
↪ wget --mirror -p --convert-links "http://opposedforces.com/parts/impreza/en_g11/type_63/"
--2021-02-03 20:48:29-- http://opposedforces.com/parts/impreza/en_g11/type_63/
Resolving opposedforces.com (opposedforces.com)... 138.201.30.59
Connecting to opposedforces.com (opposedforces.com)|138.201.30.59|:80... connected.
HTTP request sent, awaiting response... 0
2021-02-03 20:48:29 ERROR 0: (no description).
Converted links in 0 files in 0 seconds.
It seemingly returns no information. Originally I thought some JavaScript was generating the HTML, but I can't find any JS using the Firefox developer tools, and I assume Postman would not work in that case either.
Any ideas how to get around this? Ideally I can use wget to download this and all sub-pages, but alternative solutions are also welcome.
This is one of those times when the website is completely and absolutely broken.
It is unfortunate that web browsers go to great lengths to support such broken web pages.
The problem is that the server sends a broken response. This is the response I see:
---response begin---
HTTP/1.1 000
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 44892
Expires: -1
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
Set-Cookie: ASP.NET_SessionId=gxhoir45jpd43545iujdpiru; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Fri, 05 Feb 2021 09:26:26 GMT
See? It returns an HTTP/1.1 000 status line, which doesn't exist in the spec. Web browsers seem to just accept it as a 200 response and move on. Wget doesn't.
But you can get around it by using the --content-on-error option, which asks Wget to download the content irrespective of the response code.
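If you want to inspect such a malformed status line yourself, a raw socket probe avoids having an HTTP client library reject or normalize the response before you can look at it. A minimal sketch, with the host and path as placeholders for whatever server you are debugging:

```python
import socket

def raw_status_line(host: str, path: str = "/", port: int = 80) -> str:
    """Send a bare HTTP GET and return the first line of the response verbatim."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request.encode("ascii"))
        data = b""
        while b"\r\n" not in data:   # read until the status line is complete
            chunk = sock.recv(1024)
            if not chunk:
                break
            data += chunk
    return data.split(b"\r\n", 1)[0].decode("latin-1")

# Against the broken server described above, this would show the bogus
# "HTTP/1.1 000" status line that browsers silently tolerate and Wget does not.
```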
My server was working fine until I restarted it, and now my program that uses the cURL API has stopped working. After troubleshooting for a long time, I figured out what the problem is.
When I use this command:
curl -i https://server.my-site.com/checkConnection
Nginx returns this error:
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 04 Jul 2019 17:14:40 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Location: /checkConnection/
but if I use this command:
curl -i -L https://server.my-site.com/checkConnection
then the server returns:
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 04 Jul 2019 17:14:40 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Location: /checkConnection/
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 04 Jul 2019 17:14:40 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 2
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
ok
And if I use a browser, then everything works. I have no clue where the error comes from or how to fix it.
Any help is appreciated!
This is what happens when the path maps to a directory. In theory, a URL like http://example.org/directory could map to a directory like /wherever/public_html/directory, and being a directory, show an index.html or similar file from there; however, that would cause surprising issues when you go to refer to other things like images in the same directory. <img src="picture.jpg"> would load http://example.org/picture.jpg rather than http://example.org/directory/picture.jpg since it's relative to the URL the browser is actually viewing. Because of this, HTTP servers generally issue a redirect to add a slash at the end, which then both loads the right page and at a URL where relative paths do what humans expect.
Adding -L to your curl command line causes it to follow the redirect, as browsers do, and you get the result you were expecting. Without -L, curl is a more naive HTTP client and lets you do what you will with the information.
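You can watch this trailing-slash redirect happen locally with Python's built-in http.server, which behaves the same way here as Nginx and Apache. A self-contained sketch (the checkConnection directory name just mirrors the path from the question):

```python
import http.client
import http.server
import os
import socketserver
import tempfile
import threading
from functools import partial

with tempfile.TemporaryDirectory() as root:
    # Create a subdirectory so that /checkConnection maps to a directory.
    os.mkdir(os.path.join(root, "checkConnection"))
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=root)
    with socketserver.TCPServer(("127.0.0.1", 0), handler) as srv:
        port = srv.server_address[1]
        threading.Thread(target=srv.serve_forever, daemon=True).start()

        conn = http.client.HTTPConnection("127.0.0.1", port)
        conn.request("GET", "/checkConnection")   # note: no trailing slash
        resp = conn.getresponse()
        print(resp.status, resp.getheader("Location"))
        # → 301 /checkConnection/  (the server adds the slash, like Nginx does)
        srv.shutdown()
```

Following the redirect (what -L does in curl) would then fetch /checkConnection/ and get the 200.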
Maybe you have a rule for www.server.my-site.com, and that is why this returns the 301: it redirects from server.my-site.com to the www site. Maybe you should share your configuration so we can check it.
OK. I finally fixed it by adding internal routing to uWSGI. Everything is working fine now.
Title really says it all. I cannot get my domain to redirect to HTTPS: plain http://example.com does not work, but https://example.com works, as does https://www.example.com.
My nginx conf is as follows with sensitive paths removed. Here is a GIST link to all my nginx setup, it is currently the only enabled domain in my entire nginx configuration.
Console output on my local vs my server:
rublev@rublevs-MacBook-Pro ~
• curl -I rublev.io
^C
rublev@rublevs-MacBook-Pro ~
• ssh r
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-75-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
78 packages can be updated.
0 updates are security updates.
Last login: Fri May 12 16:41:35 2017 from 198.84.225.249
rublev@ubuntu-512mb-tor1-01:~$ curl -I rublev.io
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Fri, 12 May 2017 16:41:43 GMT
Content-Type: text/html
Content-Length: 339
Last-Modified: Thu, 20 Apr 2017 20:47:12 GMT
Connection: keep-alive
ETag: "58f91e50-153"
Accept-Ranges: bytes
I am at my wit's end. I truly have no idea what to do now; I've spent weeks trying to get this working.
Let's Encrypt uses HTTP by default, so just to be safe, it's preferable to keep it able to access the .well-known path for the ACME challenge. You also don't seem to be pointing to the right path for it, unless you changed your webroot to /home/rublev/sites/rublev.io?
I would try rewriting your default server like this, instead of redirecting directly to your https equivalent. Besides, it will let you test this strange behavior more easily.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name rublev.io www.rublev.io;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location ~ /.well-known {
        # Apparently you changed your webroot;
        # just make sure that the file is created and accessible
        root /home/rublev/sites/rublev.io/.well-known;
    }
}
Also, it's very important to know that you need either two different certificates, or one that accepts your domain both with and without the www prefix. If that is not the case, it might very well be the reason for your issue. To generate a cert with both domains, you can run the following command:
sudo ./certbot-auto certonly --standalone -d rublev.io -d www.rublev.io --dry-run
I would also comment out your Jenkins server and configs, just in case they're messing with your main one. Note: from this question, you can use proxy_redirect http:// $scheme://; instead of the form you're currently using. It shouldn't affect another server, but I prefer to be sure in these weird scenarios.
Another thing, which might be the path to another solution, would be to pick only one form of your domain (either with or without www) and redirect users from the "wrong" one to the "right" one. This gives you more consistent URLs, which is better for SEO and for people sharing your links.
Add this server block:
server {
    server_name example.com;
    return 301 https://example.com$request_uri;
}
We are transferring servers tonight, and we started the process. The new DNS has resolved and everything, but if you type in the site name localreviewengine.com, it kicks you to http//localreviewengine.com (so really, http://http//localreviewengine.com).
What would cause that? It's a WordPress install; Sitename and Home in the DB are set properly, there are no redirects in cPanel, and the .htaccess looks fine.
EDIT: It looks like localreviewengine.com/readme.html works though..?
Watching the response headers for this site brings a bit of light into the dark:
~ # curl -I "http://localreviewengine.com/"
HTTP/1.1 301 Moved Permanently
Date: Sat, 01 Dec 2012 01:57:11 GMT
Server: Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.17
Location: http://http://localreviewengine.com
Content-Type: text/html
As you can see, the web server is redirecting to http://http://localreviewengine.com.
That's a typo / incorrect protocol prefix in your configuration.
Check your Apache config or (most likely) the .htaccess in your document root.
Did you add any redirects / rewrites for the move process? E.g. "we're moving our site, come back when the 5,000 machines are finally up and running again"? (hopefully)
We have recently moved to a new web server (from IIS 6 to IIS 7.5) and I'm having some trouble updating our VSTO Word add-in.
Our app checks for updates manually when logging in, and if a newer version is found it updates like this (let me know if there is a better way to do this; I've tried ApplicationDeployment.Update() but had no luck with it either!):
WebBrowser browser = new WebBrowser();
browser.Visible = false;
Uri setupLocation = new Uri("https://updatelocation.com/setup.exe");
browser.Url = setupLocation;
This used to launch the setup and update the app, and when the user restarted Word they would have the new version installed. Since the server move, the update no longer happens. No exceptions are thrown. Browsing to the URL launches the updater as expected. What would I need to change to get this to work?
Note I have the following MIME types set up on the folder in IIS:
.application  application/x-ms-application
.manifest     application/x-ms-manifest
.deploy       application/octet-stream
.msu          application/octet-stream
.msp          application/octet-stream
.exe          application/octet-stream
Edit
OK, I've had a look in Fiddler and it's returning a body size of -1:
If I enter the same URL in IE you can see that the setup.exe is launched without problems.
This is what fiddler displays in the raw view when accessing from word:
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Last-Modified: Tue, 27 Sep 2011 15:07:42 GMT
Accept-Ranges: bytes
ETag: "9bd0c334277dcc1:0"
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Mon, 14 Nov 2011 07:42:18 GMT
Content-Length: 735608
MZ��������������������#������������������������������������������ �!�L�!This program cannot be run in DOS mode. $�������
*** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***
Have you tried a tool like (for instance) Fiddler to see what HTTP traffic is actually generated? Does the client make a server call? What does the server actually return?
Then:
Make the calls from within Word (which isn't working)
Make the calls by hand (which is working)
Compare both the requests and responses from those calls to spot the differences