I am new to WordPress and I have a situation where I have to redirect a URL with query params to a new URL. Can someone please help me out with it? I tried a few things but none of them work.
here is the url pattern
http://exampledomain.com/page/?p=job/123
i want it to redirect to
http://newdomain.com/jobs/123
This probably is what you are looking for:
RewriteEngine on
RewriteCond %{QUERY_STRING} (?:^|&)p=job/(\d+)(?:&|$)
RewriteRule ^/?page/?$ /jobs/%1 [QSD,R=301,L]
RewriteRule ^/?jobs/(\d+)$ /page/?p=job/$1 [END]
In case you receive an internal server error (HTTP status 500) using that rule, chances are that you operate an old version of the Apache HTTP server. In that case you will see a definite hint about an unsupported [END] flag in the HTTP server's error log file. Either upgrade the HTTP server or use the older [L] flag; it will probably work the same here, though that depends a bit on your setup.
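If you do fall back to the older [L] flag, you need a small guard so that the external redirect does not fire again on the internally rewritten request. Here is a minimal sketch (untested against your setup) that uses the REDIRECT_STATUS environment variable for that, and a trailing "?" to drop the old query string:
RewriteEngine on
# only redirect original client requests, not internally rewritten ones
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteCond %{QUERY_STRING} (?:^|&)p=job/(\d+)(?:&|$)
RewriteRule ^/?page/?$ /jobs/%1? [R=301,L]
RewriteRule ^/?jobs/(\d+)$ /page/?p=job/$1 [L]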
That rule should work likewise in the HTTP server's host configuration or in a dynamic configuration file (".htaccess" style file). You should prefer the first option, but if you really have to use a dynamic configuration file then take care that the interpretation of such files is enabled at all in the HTTP server configuration and that the file is located in the host's DOCUMENT_ROOT folder.
And a general remark: you should always prefer to place such rules in the HTTP server's host configuration instead of using dynamic configuration files (".htaccess"). Those dynamic configuration files add complexity, are often a cause of unexpected behavior, are hard to debug, and they really slow down the HTTP server. They are only provided as a last option for situations where you do not have access to the real HTTP server's host configuration (read: really cheap service providers) or for applications insisting on writing their own rules (which is an obvious security nightmare).
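For illustration, this is roughly how the same rules might sit in the host configuration; the virtual host shown here is a hypothetical skeleton, not your actual setup:
<VirtualHost *:80>
    ServerName exampledomain.com
    # ... rest of the host configuration ...

    RewriteEngine on
    RewriteCond %{QUERY_STRING} (?:^|&)p=job/(\d+)(?:&|$)
    RewriteRule ^/?page/?$ /jobs/%1 [QSD,R=301,L]
    RewriteRule ^/?jobs/(\d+)$ /page/?p=job/$1 [END]
</VirtualHost>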
My WordPress client no longer wants SSL encryption. Currently, I have the following in .htaccess to force SSL encryption:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,L]
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Previous visitors' browsers will automatically try HTTPS because of the 301, I believe. How can I move to HTTP (insecure) without having previous visitors run into issues?
I can't think of any good reason why you would want to do this; especially in 2017. There are a number of factors against you:
You still need to keep a valid SSL cert in place in order to redirect from HTTPS back to HTTP. You would need to do this for all the inbound links to HTTPS, search engine indexes, bookmarks, etc. As mentioned in this other question, without a valid SSL cert in place the user sees a browser warning before the request even reaches your site. (If you need to keep the SSL cert in place, then why not use it properly?)
Any browser that has cached the HTTP to HTTPS 301 redirect will naturally be redirected to the HTTPS site. Without a valid SSL cert they will see a browser warning. With a valid SSL cert the user will be redirected back to HTTP (but this also depends on whether the page/resources are also cached). However, this can result in a (partial) redirect loop - depending on the browser, you might get a momentary warning (ERR_TOO_MANY_REDIRECTS) before the browser resolves the conflict. Some browsers may not resolve the conflict, so the user may be left looking at an error until they manually clear their browser cache.
To minimise this redirection issue, reduce all caching to a bare minimum and change any essential redirects to 302 (temporary) far in advance of moving back to HTTP. Neither of which is ideal.
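For reference, here is a minimal sketch of what the reverse redirect could look like once you do make the move (it still requires the SSL cert to be in place, and deliberately uses a 302 rather than a 301):
RewriteEngine on
# send HTTPS traffic back to HTTP with a temporary redirect
RewriteCond %{HTTPS} on
RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=302,L]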
Google Chrome currently warns users when they are entering username/password and/or payment information over an insecure (HTTP) connection. This will naturally include logging into WordPress. You get a "Not Secure" message in the browser's address bar. Google plans to extend this behaviour to Incognito mode (all sites) and eventually to everything. This will make it very difficult for any site to stay on plain old HTTP.
See the following related question on the Pro Webmasters stack:
Are there other options besides HTTPS for securing a website to avoid text input warnings in Chrome?
And Google's Security Blog post announcing the proposed changes:
Google Security Blog - Moving towards a more secure web - September 8, 2016
With the introduction of free/automated CAs like Let's Encrypt, it's not so much a money thing these days if you simply want to enable encryption.
So, I think educating your client would be the better option.
I have a .htaccess file in the images.domainname.com subdomain. I have tried
order allow,deny
deny from "the IP address I want to block"
allow from all
but this didn't work...
Indeed, if the other website is simply embedding the images with HTML, the IP address of that website won't show up as the requester. Your server will get the IP address of the visitor viewing the copying website.
However, .htaccess rules can be used to check that the HTTP referrer doesn't come from badwebsite.com. A word of warning here: not all browsers send an HTTP referrer, so it is important to remember that the referrer might be blank.
The following will deny access to jpg, jpeg, png and gif files if the referrer is not empty and not from yourdomain.com. Replace yourdomain.com with your actual domain in the following code.
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yourdomain\.com [NC]
RewriteRule \.(jpg|jpeg|png|gif)$ - [NC,F,L]
One disadvantage of this: if people find an image from Google and click through, they will be blocked as well if their browser sends a referrer. This might not be a big deal but would require some testing on your part to determine whether or not you're happy with the behavior.
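If that bothers you, one option is to additionally exempt search engine referrers. A sketch, where the "google\." pattern is intentionally loose and only an example - adjust it to whatever referrers you want to allow:
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yourdomain\.com [NC]
# also let requests referred from Google through
RewriteCond %{HTTP_REFERER} !google\. [NC]
RewriteRule \.(jpg|jpeg|png|gif)$ - [NC,F,L]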
Alternatively, you could:
Contact the site owner about the issue. Give him a handful of days to respond.
Rename your images. This entails updating the references to the images in your HTML etc., so this may not be possible depending on the size of your site.
As a less serious remark: if the owner fails to reply after a sensible time, don't forget that you basically control a part of his website. ;)
You could instead use images with huge dimensions and possibly break the design, or upload an image with a polite request to remove the embedded images. This is not recommended in any professional context, of course, and I would really urge you to keep things clean; you don't want to be held responsible for displaying inappropriate content to unsuspecting visitors, ethically as well as legally.
We have a client's domain name (d1) pointing at one of our sites (s1), which is a .NET 4 Web Forms site.
The client has set up a subdomain on a different domain (d2) and pointed this at the s1 IP address.
We need to serve a specific page on s1 if the d2 domain is used and not have the page in the URL.
I would like to achieve this without a redirect if possible.
eg
example.com -> the site
x.example.net -> the site /thepage.aspx (but want the URL in the address bar to remain x.example.net, not x.example.net/thepage.aspx).
I've tried doing a Server.Transfer in BeginRequest, and while this worked, the postback didn't (I assume because of the transfer, but I don't know how to detect a postback in BeginRequest and thus not transfer).
I thought there might be a way to leverage routing, but there would be no path (just the domain name), so any route set up like this would presumably route all requests to this page if they don't get caught by a previous route - not ideal.
So, in short:
Is there a way to detect a postback in Application_BeginRequest in global.asax so I only transfer the initial request?
Or is there a way of mapping a domain name to a page without redirecting?
Is there some feature I don't know about that achieves this?
You can set up rewrite rules to do this. The following rule rewrites the root URL to /thepage.aspx only if the host matches x.example.net.
RewriteCond %{HTTP_HOST} ^x\.example\.net$ [NC]
RewriteRule ^$ /thepage.aspx [NC,L]
If you have IIS7: you can do this using URL Rewrite.
If you have IIS6: you can set up ISAPI Rewrite on the server.
Depending on your setup, the 2nd line may include a slash:
RewriteRule ^/$ /thepage.aspx [NC,L]
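Putting the condition and rule together, and allowing for the optional leading slash, you would end up with something like this (a sketch - test it against your ISAPI Rewrite / URL Rewrite version):
RewriteCond %{HTTP_HOST} ^x\.example\.net$ [NC]
RewriteRule ^/?$ /thepage.aspx [NC,L]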
You could write an HttpModule which examines incoming requests - if a request is for x.domain2.com, then you could invoke thepage.aspx like this:
// Compile the target page type, instantiate it, and let it process the current request
Type page_type = BuildManager.GetCompiledType("~/thepage.aspx");
Page page = (Page)Activator.CreateInstance(page_type);
page.ProcessRequest(Context);
OK, here is the 7th day of unsuccessful attempts to find an answer to why the 401 error appears...
Now,
.htaccess in the root folder contains only these 3 lines (it was simplified), and there are NO more .htaccess files in the project:
RewriteEngine On
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
So, it redirects all requests to HTTPS. It works fine for any URL, even for the /administration directory.
So,
http://mydomain.com
becomes
https://mydomain.com
If https://mydomain.com was entered, there are no redirections.
http://mydomain.com/administration/index.php
becomes
https://mydomain.com/administration/index.php
If https://mydomain.com/administration/index.php was entered, there are no redirections.
That's clear, and the problem is below.
I want the /administration directory to be password protected. My shared hosting control panel allows protecting directories without manually creating .htaccess and .htpasswd (you choose a directory to protect, create a username and password, and .htaccess and .htpasswd are created automatically). So, a .htaccess appears in the /administration folder. The .htpasswd appears somewhere else, the path to .htpasswd is correct, and everything looks correct (it works the same way as creating it manually). So, there are 2 .htaccess files in the project: one in the root directory and one in the /administration directory (and that .htaccess knows where the .htpasswd is).
Once the password is created,
the results are:
You enter:
https://mydomain.com/administration/index.php
Then it asks you to enter a password.
If you enter it correctly,
https://mydomain.com/administration/index.php is displayed.
The result: it works perfectly.
But, if you enter
http://mydomain.com/administration/index.php (yes, http, without S)
then instead of redirecting to the same page over https,
it redirects to
https://mydomain.com/401.shtml (starts with httpS)
for some unknown reason, and it does not even ask for a password. Why?
I've contacted customer support regarding this question and they are sure the problem is in the .htaccess file, and they do not fix .htaccess files (that's understandable; they don't, and I don't mind).
Why does this happen?
Did I forget to put some flags or options to change default settings in the .htaccess file?
P.S. Creating .htaccess and .htpasswd manually (not from the hosting control panel) for the /administration folder causes the same 401 error when http (not https) is entered.
And the problem appears with URLs to the /administration directory only.
Thank you.
Try using this instead. Note the L and R flags.
RewriteEngine On
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
Also clear your browser's cache first, to remove the old incorrect redirect.
If that doesn't work, try using this:
RewriteCond %{HTTPS} !on
RewriteCond %{THE_REQUEST} ^(GET|HEAD)\ ([^\ ]+)
RewriteRule ^ https://%{HTTP_HOST}%2 [L,R=301]
I feel a bit bad about writing it, as it seems kind of hackish in my view.
EDIT
Seems the 2nd option fixed the problem. So here is the explanation as to why it works.
The authentication module is executed before the rewrite module. Because the username and password are not sent when first requesting the page, the authentication module internally 'rewrites' the request URL to the 401 page's URL. After this, mod_rewrite runs and %{REQUEST_URI} now contains 401.shtml instead of the original URL. So the resulting redirect contains 401.shtml, and not the URL you want.
To get to the original (not 'rewritten') URL, you need to extract it from %{THE_REQUEST}, which internal rewrites never touch. THE_REQUEST is in the form "[requestmethod] [url] HTTP/[versionnumber]". The RewriteCond extracts just the middle part ([url]).
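To make that concrete, here is the same condition and rule again, annotated with what they see for the URL from the question:
# for the original request line
#   GET /administration/index.php HTTP/1.1
# %2 captures /administration/index.php ...
RewriteCond %{THE_REQUEST} ^(GET|HEAD)\ ([^\ ]+)
# ... which is then reused in the redirect target
RewriteRule ^ https://%{HTTP_HOST}%2 [L,R=301]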
For completeness I added the [L,R=301] flags to the second solution.
I think I found an even better solution to this!
Just add this to your .htaccess
ErrorDocument 401 "Unauthorized"
Solution found at:
http://forum.kohanaframework.org/discussion/8934/solved-for-reall-this-time-p-htaccess-folder-password-protection/
-- EDIT
I eventually found that the root cause of the issue was ModSecurity flagging my POST data (script and iframe tags cause issues). It would try to return a 401/403 but couldn't find the default error document, because ModSecurity had made my .htaccess go haywire.
Using ErrorDocument 401 "Unauthorized" bypassed the missing error document problem but did nothing to address the root cause.
For this I ended up using JavaScript to add a 'salt' to anything which was neither whitespace nor a word character...
$("form").submit(function(event) {
$("textarea,[type=text]").each(function() {
$(this).val($(this).val().replace(/([^\s\w])/g, "foobar$1salt"));
});
});
then PHP to strip the salt again...
// recursively strip the salt from all posted values
function stripSalt($value) {
    if (is_array($value)) $value = array_map('stripSalt', $value);
    else $value = preg_replace("/(?:foobar)+(.)(?:salt)+/", "$1", $value);
    return $value;
}
$_POST = stripSalt($_POST);
Very, Very, Very Important Note:
Do not use "foobar$1salt" otherwise this post has just shown hackers how to bypass your ModSecurity!
Regex Notes:
I thought it may be worth mentioning what's going on here...
(?:foobar)+ = match first half of salt one or more times but don't store this as a matched group;
(.) = match any character and store this as the first and only group (accessible via $1);
(?:salt)+ = match second half of salt one or more times but don't store this as a matched group.
It's important to match the salt multiple times per character because if you've hit submit and then you use the back button you will go back to the form with all the salt still in there. Hit submit again and more salt gets added. This can happen again and again until you end up with something like:
foobarfoobarfoobarfoobar>saltsaltsaltsalt
I was not satisfied with the solutions above so I came up with another one:
In a modern web server configuration we should redirect all traffic to HTTPS so the user cannot reach any content without HTTPS. Once the user is browsing our content over HTTPS, we can use authentication. With this in mind, we can wrap the authentication directives in an If directive:
<If "%{HTTPS} == 'on'">
AuthType Basic
...
</If>
You can keep your Rewrite directives and use them as you like.
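For a fuller picture, the wrapped block could look something like this; the AuthName and .htpasswd path below are placeholders, not values from the original setup:
<If "%{HTTPS} == 'on'">
    AuthType Basic
    AuthName "Administration"
    AuthUserFile /path/to/.htpasswd
    Require valid-user
</If>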
With this solution:
you do not need to change ErrorDocument as suggested by Hoogs
you do not need to extract the path from THE_REQUEST in a hackish way as suggested by Gerben
This is the type of thing that is a bit tricky to troubleshoot on Apache without the box right in front of you, but what I think is happening is that your rewrite directive is being processed after path resolution, and it's the path resolution that has the password requirement.
Backing up a bit, the way a URL is resolved in Apache is that the request comes in and gets handed from module to module, kind of like a bucket brigade. Each module does its own thing: some modules do content negotiation, some translate URLs to file paths, some check authentication, and one of them is mod_rewrite...
One place where you see this in the configuration is that there are both a Location directive and a Directory directive, which seem the same in most respects but are different because Locations talk about URLs and Directories talk about filesystem paths.
Anyhow, my guess is that going down the bucket brigade, Apache figures out that it needs a password to access that content before it figures out that it needs to redirect to HTTPS. (mod_rewrite is kind of a crazy module and it can mess with all kinds of things in surprising ways: it can do path translation, bits and pieces of rewriting, make subrequests, and a bunch of other nutty things.)
There are a few ways you can fix this that I can think of:
Change your document root in the vhost container for the HTTP site so that it can't find the passworded file (this would be my approach)
Change your module load order so that mod_rewrite happens earlier in the chain (may have unexpected consequences)
Use setenvif
That last one needs more explanation. Remember the bucket brigade I told you about? Apache modules can also set environment variables, which are completely outside of the module->module->module chain. You could, perhaps, set an environment variable if the site is not HTTPS. Then, however you set up your access control, you could use the SetEnvIf directive to always allow access to the resource if that variable is set - BUT you have to make sure that the request will still hit that rewrite rule.
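A rough sketch of that idea (Apache 2.4 syntax; the variable name, AuthName and .htpasswd path are hypothetical, and it assumes the HTTP and HTTPS sites are separate vhosts, which may not match a shared-hosting setup):
# in the port-80 vhost only: flag every request as plain HTTP
SetEnvIf Request_URI .* PLAIN_HTTP

# in the configuration for /administration:
AuthType Basic
AuthName "Administration"
AuthUserFile /path/to/.htpasswd
# either a valid user OR the PLAIN_HTTP flag grants access;
# the flagged request then still hits the rewrite rule and is bounced to HTTPS
Require valid-user
Require env PLAIN_HTTP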
As I said, my choice would be #1, but sometimes people need to do crazy things, and Apache will let you.
My real-world SOP for https:// sites these days is that I just shoot all of my port 80 content over to a single vhost that can't serve any content at all. Then I mod_rewrite everything over to https://... badda bing, badda boom, no HTTP and no convoluted security risks.
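In configuration terms that SOP is roughly the following sketch (hostnames and the rest of the vhost contents are placeholders):
<VirtualHost *:80>
    ServerName example.com
    # no content is served here; everything is bounced to HTTPS
    RewriteEngine on
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName example.com
    # SSLEngine on, certificate directives, DocumentRoot and the real site live here
</VirtualHost>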