Blocked connections from .htaccess whitelist - WordPress

I have recently had a website attacked (DDoS?). My database connections were full and my server couldn't accept new requests, resulting in my server (which hosts multiple websites) throwing constant errors. The IPs and countries were always changing.
I whitelisted only certain countries in my .htaccess, which is not a big deal because this customer works mostly in my country.
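Something along these lines (a simplified sketch of what I did; it assumes Apache with the mod_geoip module loaded, 2.2-style access directives, and "XX" as a placeholder country code):

GeoIPEnable On
SetEnvIf GEOIP_COUNTRY_CODE XX AllowCountry
# Deny everything, then allow only requests tagged with the allowed country
Order Deny,Allow
Deny from all
Allow from env=AllowCountry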
I would like to know if there is a way to tell whether more attacks are incoming. I was able to see the requests in the logs while they were still being accepted, but I can't see them anymore.
My website is built with WordPress.
Thanks!

Related

What are the potential risks of not using a Web Application Firewall?

I develop and manage a small promotional/marketing website on WordPress for a startup SaaS product. We're using Cloudflare for DNS and whatnot. Apparently the WAF has been turned on, which routes traffic through a proxy and changes the user's apparent IP address. I'm trying to use IP addresses to filter "internal" traffic for Google Analytics, and the only way this works is with the WAF turned off. If not using the WAF is going to cause any significant risk for my website, then obviously I'll need another way to do my analytics thing. Reading about what it provides on their website doesn't make it clear to me how important it is for a site like this. If anyone who "gets it" has some insight to share, I'd be most appreciative. Thanks!
You should definitely use the WAF - it will protect your website from many malicious bots and attacks.
WordPress sites are particularly juicy targets for attackers, for a number of reasons:
The security of a default WordPress installation is not great.
Every WordPress site shares common defaults, such as the location of the admin login page and the admin username, along with other exploitable resources.
WordPress is extremely popular, currently used by an estimated third of all websites on the internet.
WordPress is used by many, many small businesses and hobbyists who do not know how to secure their sites properly.
Attackers can therefore very easily scour the web for WordPress websites that are easily hackable. Other nefarious activities, such as comment spam or denial-of-service attacks, are also commonly carried out with ease against WordPress sites.
What protection does the WAF offer?
Cloudflare and most other high-quality WAFs can be configured to protect your site by automatically performing actions like:
Blocking known bad IP addresses.
Blocking bad bots which are automatically making requests to your site.
Limiting high numbers of requests from one source in a short amount of time (usually a sign of a DoS attack or scraping).
Blocking requests from particular countries or locations.
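For instance, country blocking in Cloudflare is typically done with a firewall rule; a sketch, where "XX" is a placeholder country code and ip.geoip.country is the field name from Cloudflare's rules language:

Expression: (ip.geoip.country eq "XX")
Action: Block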
There is no reason why you wouldn't want to enable this protection if you have it available to you, and Cloudflare is the industry leader in this area.
Additionally, I would recommend researching how to better secure your WordPress site in ways beyond just the WAF - e.g. The Ultimate WordPress Security Guide
How to solve the IP address issue
Cloudflare is not changing the user's (i.e. the client's) IP address; rather, it is acting as a proxy. As you have noticed, the IP address you're seeing is not the client's own but one of Cloudflare's. This is crucial to how Cloudflare protects your site, and it is a common issue when using any kind of proxy.
To get the correct IP address when using a proxy, you need to check the X-Forwarded-For header. You might see this as a string of comma-separated IP addresses, depending on how many proxies the user has gone through before reaching the site. The first one in the list is the original client IP.
e.g. Here 203.0.113.1 is the client's original IP address:
X-Forwarded-For: 203.0.113.1,198.51.100.101,198.51.100.102
Documentation: How does Cloudflare handle HTTP Request headers?
Anyway, it's good to use a function which comprehensively checks headers and gives you the best match for the original client IP, regardless of whether the user is behind a proxy, so that it works reliably in every case.
Here's a very popular StackOverflow question about this:
What is the most accurate way to retrieve a user's correct IP address in PHP?
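To illustrate the idea, here is a minimal PHP sketch (assuming the site sits behind Cloudflare, which also sets its own CF-Connecting-IP header; in production you should additionally verify that the request really comes from Cloudflare's published IP ranges, since these headers are otherwise spoofable):

<?php
// Minimal sketch: recover the original client IP behind a proxy.
function get_client_ip() {
    // Cloudflare puts the original client IP in CF-Connecting-IP.
    if (!empty($_SERVER['HTTP_CF_CONNECTING_IP'])) {
        return $_SERVER['HTTP_CF_CONNECTING_IP'];
    }
    // Generic proxies append to X-Forwarded-For; the first entry is the
    // original client. The header is client-controlled, so only trust it
    // when the request arrives from a known proxy.
    if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
        $parts = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
        return trim($parts[0]);
    }
    return isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
}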

What is the best solution to prevent malicious IPs from accessing my hosting server?

Just to explain my setup: I have a few websites hosted on a shared server (Lunarpages) and I use Google Apps (with modified MX records in Lunarpages) so the Google Apps emails work.
Now, I've noticed occasionally that a mail script on one of my sites gets triggered without any content, though it includes the IP information that the form collects. I looked up a couple of those IP addresses with AbuseIPDB, and they are known hacking IPs. So I want a good way to block all access to my server from known bad IPs.
I see an option in cPanel in Lunarpages to turn on Cloudflare for security, and looking into it a little, it does appear that they block bad IPs. But I'm a little concerned about whether that might break how my site or email works, or how my analytics and email forms collect IP address information, or whether anything would change for me beyond just turning it on and having the bad IPs blocked. I'm not looking to get myself into a lot of troubleshooting.
Is Cloudflare a good solution, or are there other good alternatives?
Regarding AbuseIPDB, they look like they have an API that I might be able to use to block IPs, but if I understand correctly, I would have to modify all my sites, and that still wouldn't block direct access to a lot of files. Unless I'm mistaken.
You can use ipset to block a list of IP addresses, and you can populate the set from a spam/abuse database.
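A minimal sketch (run as root; the set name and addresses are placeholders):

# Create a set of bad IPs and drop all inbound traffic from them.
ipset create blacklist hash:ip
ipset add blacklist 203.0.113.50
ipset add blacklist 198.51.100.23
iptables -I INPUT -m set --match-set blacklist src -j DROP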

Subdomains. How do you do development with subdomains?

I am currently building a web app which also utilizes websockets (Rails for the web server and Node.js for socket.io).
I have structured my application to use subdomains to separate connections to the Node.js server from the Rails web server. I have "socket.mysite.com" routed to the Node server and everything else to the web server.
I am able to test this functionality on localhost. I simply modified my /etc/hosts to include the following:
127.0.0.1 socket.mysite.com
127.0.0.1 mysite.com
I know that in production I simply have to create a CNAME record for socket.mysite.com, and this will also work on my users' computers.
However, I am accustomed to testing my application by passing an IP address around. My team members typically set up the server on their own machines and do development there. When we want to test our individual servers, we just pass around an IP like "http://123.45.123.45".
With the new subdomain hack, this is no longer possible without modifying each tester's /etc/hosts file. I honestly don't expect my testers to modify their /etc/hosts on the spot. What I could do is have each member of my team use their own domain and create the appropriate CNAME records for each individual team member.
Is there an easier way to allow me to run my app on an IP and just pass that IP around?
It sounds like your needs have scaled beyond the days of simply editing a hosts file. While you could have everyone on your team continue to edit hosts files, there are two main risks that I see here:
With your idea of just using IP addresses, you risk missing something in testing that you would only see in production, since an issue may depend on something in the domain configuration.
With hosts entries, you introduce a lot of complexity and unnecessary changes to each developer's and tester's configuration, which of course leaves the door open for mistakes, and it also takes time that will add up over the long term.
Setting up a DNS server may be helpful in your case. You could map a set of domains matching a certain pattern to each developer's machine so that your application still runs correctly. This would allow you to share the URLs without having to constantly reconfigure each person's computer. Additionally, marketing and sales stakeholders can easily view product demos without needing to learn what the elusive hosts file is for.
If you have an IT department, they can help you set up the DNS. However, if you are a small team without a real IT department, some users have found success using DNS servers designed for home or small office networks.
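For example, dnsmasq (a small DNS server often used on home and office networks) can map a whole subtree of names to one developer's machine with a single line; a sketch with hypothetical names and addresses:

# /etc/dnsmasq.conf
# alice.dev.mysite.com and anything under it (e.g. socket.alice.dev.mysite.com)
# resolve to Alice's machine; likewise for Bob.
address=/alice.dev.mysite.com/192.168.1.50
address=/bob.dev.mysite.com/192.168.1.51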

Website accessible from everywhere except for client's network

My client has a website that is showing some strange behavior. The site is built in ASP.NET and used to be hosted on their internal network. It has now been moved to a different server outside their network. They have other sites hosted on the same server, some built using DotNetNuke and some in classic ASP. All these sites share one application server, with the database (SQL Server 2008) on a separate server on the same network as the application server.
Now that this site has been moved to the outside server, they can't access it. I can, and so can others that I work with (from different IPs, across the country). But the client can't from their network. They can access the landing page at subsite.clientdomain.com (no DB access), but nothing else. So, for instance, there's a link to subsite.clientdomain.com/folder. When they click that link, the URL changes to subsite.com/folder, which does not work. For myself and others not at the client site, the URL does not change and the page opens with no problems.
I didn't write the site, and didn't even know it existed before this problem cropped up, so I know very little more than this. Any help is appreciated.
I'm going to go with Martijn B's answer: there's a DNS issue on the internal network. Somewhere on one of the DNS servers there is a definition that maps http://companywebsite to an IP address like 192.168.1.20 or whatever.
I would open a command prompt on your PC and type
ping new_website_name.com
Take a look at the IP address that comes back. You can also run nslookup on new_website_name.com, which will give you more information. If you (person A) get one IP address and person B (inside the network) gets a different IP address, there is definitely a DNS issue on the internal network.
You're going to have to do some network tracing to determine exactly where any redirection is occurring. Given that the problem is only manifested in certain locations, it is likely that it is a function of network configuration in that location (as previously suggested). Without understanding exactly what redirection is occurring, it would be unwise to make configuration changes that might make the problem worse or introduce new issues.
A DNS server cannot, AFAIK, redirect to a different URL. So something is redirecting from subsite.clientdomain.com/folder to subsite.com/folder, which could be caused by an HTTP redirect. This can be triggered by the software/website itself or by IIS.

Redirecting http traffic to another server temporarily

Assume you have one box (a dedicated server) that's on 24/7 and several other boxes that are user machines with unused bandwidth. Assume you want to host several web pages. How can the dedicated server redirect HTTP traffic to the user machines? It is desirable that the address field in the web browser still displays the right address, and not an IP. I.e., I don't want to redirect to another web page; I want to tell the web browser that it should request the same web page from a different server. I have been browsing through the 3xx codes, and I don't think they are made for anything like this.
It should work somewhat along these lines:
1. Dedicated server is online all the time.
2. User machine starts and tells the dedicated server that it's online.
(several other user machines can do the same)
3. Web browser looks up domain name and finds out that it points to dedicated server.
4. Web browser requests page.
5. Dedicated server tells web browser to repeat request to user machine
Is it possible to use some kind of redirect, and preferably tell the browser to keep sending further requests to the user machine? The user machine can close down at almost any point in time, but it is assumed that it will wait for ongoing transactions to finish; no closing the server program in the middle of a GET or anything like that.
What you want is called a proxy server or load balancer; it would sit in front of your web server.
The web browser would always talk to the load balancer, and the load balancer would forward the request to one of several back-end servers. No redirect is needed on the client side, as the client always thinks it is just talking to the load balancer.
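As a sketch, an nginx reverse proxy balancing across two back-end servers might look like this (hypothetical addresses):

# nginx.conf (inside the http block)
upstream backends {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://backends;
    }
}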
ETA:
Looking at your various comments and re-reading the question, I think I misunderstood what you wanted to do. I was thinking that all the machines serving content would be on the same network, but now I see that you are looking for something more like a p2p web server setup.
If that's the case, using DNS and HTTP 30x redirects is probably what you need. It might look something like this:
Your "master" server would serve as an entry point for the app, and would have a well known host name, e.g. "www.myapp.com".
Whenever a new "user" machine came online, it would register itself with the master server, and the master server would create or update a DNS entry for that user machine, e.g. "user123.myapp.com".
If a request came to the master server for a given page, e.g. "www.myapp.com/index.htm", it would do a 302 redirect to one of the user machines based on whatever DNS entry it had created for that machine - e.g. redirect them to "user123.myapp.com/index.htm".
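A sketch of what the master server's redirect could look like in PHP (the host names are hypothetical, and the list of online machines would be maintained by the registration step above):

<?php
// Pick one of the currently registered user machines and send the
// browser a 302 redirect to the same path on that machine.
$online = array('user123.myapp.com', 'user456.myapp.com');
$target = $online[array_rand($online)];
header('Location: http://' . $target . $_SERVER['REQUEST_URI'], true, 302);
exit;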
Some problems I see with this approach:
First, once a user gets redirected to a user machine, if that machine went offline it would seem like the app was dead. You could avoid this by having all the links on every page point specifically to "www.myapp.com" instead of using relative links, but then every single request has to be routed through the master server, which would be relatively inefficient.
You could potentially solve this by changing the DNS entry for a user machine when it goes offline to point back to the master server, but that wouldn't work without an extremely short TTL.
Another issue you'll have is tracking sessions. You probably wouldn't be able to use sessions very effectively with this setup without a shared session-state server of some sort accessible by all the user machines, although cookies should still work.
In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
and more interesting stuff in here
Apart from load balancing, you will need to set up a more or less similar environment on the user machines.
This sounds like 1 part proxy, 1 part load balancer, and about 100 parts disaster.
If I had to guess, I'd say you're trying to build some type of relatively anonymous torrent... But I may be wrong. If I'm right, HTTP is entirely the wrong protocol for something like this.
You could use DNS. Off the top of my head, you could set up a hostname for each machine that is going to serve users:
www IN A xxx.xxx.xxx.xxx ; IP address of machine 1
www IN A xxx.xxx.xxx.xxx ; IP address of machine 2
www IN A xxx.xxx.xxx.xxx ; IP address of machine 3
Then as other machines come online, you could add them to the DNS entries:
www IN A xxx.xxx.xxx.xxx ; IP address of machine 4
The only problem is that you'll have to lower the time-to-live (TTL) on each record (the common default is 86400 seconds, i.e. one day).
If a machine goes down, you'll have to remove its DNS entry, though I do think this is the least intensive way of adding capacity to a website. Jeff Atwood has more info here: is round robin DNS good enough?
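A zone-file sketch with a short TTL, so records can be added and removed quickly (placeholder addresses):

$TTL 60 ; seconds, instead of the common default of 86400 (1 day)
www IN A 203.0.113.10 ; machine 1
www IN A 203.0.113.11 ; machine 2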
