How do these IPs get through my htaccess? - WordPress

I have an htaccess file like this:
<Limit GET HEAD POST>
order deny,allow
allow from xx.xx.xx.xx/xx
allow from xx.xx.xx.xx/xx
allow from xx.xx.xx.xx/xx
...doesn't matter....some ips...
deny from all
</Limit>
And these IP addresses in MyBB:
5.10.83.26
5.10.83.7
5.10.83.40
overload my server every day until it gets stuck, and then I have to wait for my host to run flush-hosts because I don't have permission to do it myself...
How are those IP addresses getting past my restrictions in htaccess? Yes, I am sure they are not in the allow list.
WordPress is installed in the root and MyBB is in a subfolder; I see those addresses in MyBB.
At the very least, how can I add deny from 5.10.83.00/26 to htaccess and keep deny from all?

I can't tell you how they're getting through your restrictions, but according to the whois result for 5.10.83.* the people to ask can be reached at 'abuse#softlayer.com'.
SoftLayer is a cloud platform and whoever is causing your DoS is one of their clients... good luck.
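For the last part of the question, the range deny can simply be added alongside the existing rules. A minimal sketch, assuming the same Apache 2.2-style directives as in the snippet above (5.10.83.0/26 covers 5.10.83.0 - 5.10.83.63, so it includes all three addresses listed):
<Limit GET HEAD POST>
order deny,allow
deny from all
deny from 5.10.83.0/26
allow from xx.xx.xx.xx/xx
</Limit>
Keep in mind that with order deny,allow an allow line always overrides a deny line, so the explicit deny only has an effect if the range is not also matched by one of the allow entries, and that rules inside <Limit GET HEAD POST> apply to those request methods only.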

Related

Allowing only your own IP address with nginx

I am managing a subdomain using nginx conf files. I am able to get a working subdomain up, and deny access to it (resulting in 403) by including deny all;. However, when I try to add allow 1.2.3.4; (not posting my real IP address) right above it (this is where I understand you have to put it to allow access to your own IP address), I am still getting 403 when I try to access the subdomain on my browser (in firefox private mode). I got my IP address through https://www.whatismyip.com/, and I am using the one given under "My Public IPv4 is: ". Is this the correct IP address I should be using? If not how should I go about finding the right IP address to allow?
Maybe this will help if you want to access your resource via nginx locally. You should put it in the subdomain's server block.
allow 127.0.0.1;
deny all;
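If the goal is instead to allow your own public IP address, as in the question, a sketch along the same lines might look like this, assuming 1.2.3.4 stands in for the public IPv4 address the browser actually connects from and that the directives sit in the subdomain's server block:
location / {
    allow 1.2.3.4;   # your public IPv4 address (placeholder)
    deny all;        # everyone else gets 403
}
If a 403 still comes back, the address nginx actually sees may differ from the one whatismyip.com reports (for example when the request passes through a proxy or CDN); the client address recorded in the nginx access log is the one that has to be allowed.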

In WordPress, how can I restrict access to xmlrpc to a specific IP/host?

I have a WordPress site on my server. I'm developing an application, hosted on the same server, that integrates with WordPress through XML-RPC (at the moment I can't use the REST API), managing posts, taxonomies and so on.
How can I restrict access to the xmlrpc service to my application/server only?
If you are running on an Apache server, you can just write a rule in the server configuration file to allow access only from your IP address, something like this:
# Restrict access to xmlrpc.php to a single IP
<FilesMatch "xmlrpc\.php">
Order Deny,Allow
Deny from all
Allow from 192.168.1.2
</FilesMatch>
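If the server runs Apache 2.4 or newer, Order/Deny/Allow are deprecated (they need mod_access_compat); a sketch of the equivalent using the 2.4 authorization syntax, with the same placeholder address:
# Apache 2.4+: only this address may reach xmlrpc.php;
# every other client is denied because no Require directive matches
<FilesMatch "xmlrpc\.php">
Require ip 192.168.1.2
</FilesMatch>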

How to whitelist IPs in a .htaccess file? Is it compulsory to put the IP list between <Limit GET POST PUT> </Limit> directives?

I want to allow only a list of whitelisted IPs. I don't want to blacklist or deny any other list of IPs in the .htaccess file.
So please help me: how do I whitelist (allow only) IPs in a .htaccess file?
Is it compulsory to write the IP list in between the <Limit> directives?
Is it compulsory to add the "order" statement when I only want to set a list of allowed IPs, without blacklisting or denying any other list of IPs?
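A minimal whitelist-only sketch, assuming the Apache 2.2-style directives used in the other snippets on this page; the <Limit> wrapper is not required (without it the rules apply to every HTTP method), and the addresses are placeholders:
order deny,allow
deny from all
allow from 203.0.113.10
allow from 198.51.100.0/24
The order line is what makes this behave as a whitelist: with order deny,allow, the deny from all is evaluated first and the allow lines override it for the listed addresses only.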

Blocking IP range in htaccess file

I'm managing a site and the site is built in WordPress. It gets an ENORMOUS amount of traffic from bots and we want to block all of them except for important bots like Google, Yahoo, Bing and Baidu. We use Cloudflare and I want to block them at two layers, the Cloudflare firewall and the htaccess file. In the htaccess file, I know how to block a single IP address and a range written with a trailing suffix like 123.123.123.0/16.
However, I need to block the following IP ranges:
69.30.192.0 - 69.30.255.255
93.55.115.64 - 93.55.115.71
How do you set rules for this in the htaccess file? Cloudflare seems to follow the same rules.
You've almost got it. The /16 notation is actually called CIDR Notation.
The number indicates how many bits to match from left to right. The Wiki page explains it in depth.
Or... you can just take my word for it and use a tool like this one I found: http://www.ipaddressguide.com/cidr#range
You can then use deny from in your .htaccess just as you would for a single IP, with the values the tool gives you:
Order Allow,Deny
Deny from 69.30.192.0/18
Deny from 93.55.115.64/29
Allow from all
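For the two ranges in the question, the arithmetic behind those prefixes works out as follows (counting addresses as powers of two; the prefix length is 32 minus that exponent):
69.30.192.0 - 69.30.255.255  ->  (255 - 192 + 1) * 256 = 16384 = 2^14 addresses  ->  32 - 14 = /18
93.55.115.64 - 93.55.115.71  ->  71 - 64 + 1 = 8 = 2^3 addresses  ->  32 - 3 = /29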
Not sure how reliable the source is, but this is from clockwatchers
http://www.clockwatchers.com/htaccess_block.html
To block a single IP address:
order allow,deny
deny from 127.0.0.1
allow from all
This will refuse all requests made by IP address 127.0.0.1; an error page is shown instead.
To block multiple IP addresses, list them one per line:
order allow,deny
deny from 127.0.0.1
deny from 127.0.0.2
deny from 127.0.0.3
allow from all
To block an entire IP range:
deny from 127.0.0
This will refuse access for any user with an address in the 127.0.0.0 to 127.0.0.255 range.
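The same range can also be written in CIDR notation, matching the /18 and /29 form used above:
deny from 127.0.0.0/24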
Edit: Just found a similar question here
How to Block an IP address range using the .htaccess file
Looks like our answers are similar too.
The answer from @Nick is good, so as far as configuring the .htaccess goes, you should follow it.
My answer is about another issue in your question: you want to block the IP range 69.30.192.0 - 69.30.255.255, but a quick search in the ARIN database (WHOIS for IP addresses) shows that this range does not belong to a single owner.
In fact, by doing this you might deny your website to non-bots as well.
Eg:
69.30.192.0 - 69.30.192.31 belongs to LEAKY****.COM
...
69.30.193.0 - 69.30.193.15 belongs to TA*****, Abdelkader
etc.

How to make a Squid-Cache internal URL rewrite?

url_rewrite_program in squid-cache really redirects the URL. In other words, the end user gets a response back that says "redirected page from foo to bar" and then makes another request to the redirected address. I don't want this.
What I want to achieve is similar to Apache's mod_rewrite: an absolutely transparent rewriting mechanism, so that the user requests specific content and gets it as the response (regardless of the initially requested URL) without any HTTP redirection.
The reason I want to avoid redirection via HTTP is that I don't want the end user to see internal application structures. For example, the user requests "application1.foo.com" and gets the content of a much lengthier URL. If the end user bookmarks it, they bookmark my clean little URL (application1.foo.com). This keeps users away from such details and gives them a uniform URL for the service even if I change it in the future. For example, I might map application1.foo.com to badprovider.com/path/to/file.php initially and later change it to goodprovider.com/file.php, and the user won't notice. The advantage is that end-user bookmarks remain correct, and it keeps their behaviour guided in a more uniform way.
Did you try setting up Squid as a reverse proxy in 'accel' mode? It worked for me:
# an ACL named "all" matching every source address
acl all src all
# listen on port 3128 in accelerator (reverse proxy) mode, using the Host header to pick the site
http_port 3128 accel vhost
# the origin web server that actually serves the content
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=myAccel
# only accept requests whose destination domain is ours
acl our_sites dstdomain your_domain.net
http_access allow our_sites
http_access deny all
# and only forward those requests to the origin peer
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
negative_ttl 0
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
coredump_dir /var/spool/squid
your_domain.net is the domain you want to rewrite.
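If end users should not have to use port 3128, Squid itself can listen on port 80 with the origin server moved to another local port; a sketch under that assumption (the 8080 port is a placeholder):
# Squid answers directly on port 80 ...
http_port 80 accel vhost
# ... and forwards to the origin server, now listening on 127.0.0.1:8080
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=myAccel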
