We have a website deployed on AWS EC2 running Ubuntu, Apache and MySQL. We have been getting continuous requests from the IPs below:
"195.154.105.219"
"88.150.242.243". Requesting for xmlrpc.php file using POST method. As a result our website has become really slow and our clients work has been effected. As of now we have blocked these IP values by dropping them from iptables. We would like to know how to safegaurd our site from any future attacks like this.
The question is very general and, depending on your application's requirements, your budget and other factors, there are several techniques you can use, separately or together, to mitigate DDoS and spam attacks.
Use Auto Scaling and an Elastic Load Balancer to let AWS scale your infrastructure with traffic: http://aws.amazon.com/autoscaling/
Use S3 to serve static content. S3 is designed to scale automatically with incoming traffic, and any content served directly from S3 offloads your EC2-based web server: http://aws.amazon.com/s3/
Use CloudFront to distribute and serve your content from AWS' edge locations. This mitigates DDoS by spreading attackers' requests across the network of edge locations instead of sending that traffic to your web server: http://aws.amazon.com/cloudfront/
All three of these options have an associated cost, so be sure to understand the pricing structure before deciding to implement any of them.
If you have a relatively short and stable list of IP addresses you want to block, you can customise either your EC2 instance's Security Group (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html) or your VPC subnet's network ACL (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) to keep traffic from those addresses out (note that security groups only support allow rules, so an explicit deny has to go in the network ACL). This approach is not very scalable and, most of the time, you will end up playing a cat-and-mouse game trying to catch up with whatever new addresses your attackers use.
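For example, a network ACL deny rule can be added with the AWS CLI; a minimal sketch (the ACL ID and rule number are placeholders, and a similar rule would be needed for port 443):
# Deny HTTP traffic from one of the offending addresses, ahead of the allow rules.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 90 \
    --protocol tcp \
    --port-range From=80,To=80 \
    --cidr-block 195.154.105.219/32 \
    --rule-action deny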
Using plain old Apache configuration to block certain URLs, or to restrict access to them by IP address, is very effective too (see http://httpd.apache.org/docs/current/en/mod/mod_authz_core.html#require and the Files directive).
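A minimal sketch for Apache 2.4 with mod_authz_core, restricting xmlrpc.php to a trusted range (the 203.0.113.0/24 network is a placeholder for your own addresses):
# Only allow the trusted range to reach xmlrpc.php; everyone else gets 403.
<Files "xmlrpc.php">
    Require ip 203.0.113.0/24
</Files>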
Last but not least, I would encourage everyone to watch this re:Invent talk about DDoS resiliency on AWS: https://www.youtube.com/watch?v=V7vTPlV8P3U
Seb
xmlrpc.php is from WordPress. Install the Disable XML-RPC Pingback plugin or, better yet, deny access to the xmlrpc.php file in the site's .htaccess; that will fix it. Also check wp-admin and your scripts for anything weird, or just run find /var/www/ -type f -mtime -10 to list the most recently modified files and look for any suspicious PHP script.
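A minimal .htaccess sketch (Apache 2.4 syntax; Apache 2.2 would use Order/Deny directives instead) that blocks xmlrpc.php outright:
# Deny all access to xmlrpc.php.
<Files "xmlrpc.php">
    Require all denied
</Files>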
I use Nginx to manage a lot of my web services. They listen on different ports, but are all accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but the original backend servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of serving sites over HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified, as in a man-in-the-middle attack, or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services run over plain HTTP on private networks just fine. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the ports for the other services blocked as well? Say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080: can the site still be accessed directly at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any port you don't intend to accept traffic on (see the sketch below). You can always unblock a port later if you add a service that needs it.
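A minimal firewall sketch with ufw, assuming the only things that should be reachable from outside are SSH and the Nginx proxy:
# Block everything inbound except SSH, HTTP and HTTPS.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp     # SSH
sudo ufw allow 80/tcp     # HTTP (ideally only used to redirect to HTTPS)
sudo ufw allow 443/tcp    # HTTPS, terminated by the Nginx reverse proxy
sudo ufw enable
# The backend on 8080 is now unreachable from outside; binding it to 127.0.0.1
# keeps it off public interfaces entirely.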
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. Even though it's within a private ecosystem, any process running on the server can reach localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, it can also be sniffed, even if requests to localhost:8080 require authentication. All a rogue service would need to do is monitor the port and wait for a user to log in; then it can capture everything passing between the two endpoints.
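For context, a typical (hypothetical) Nginx setup matching the question looks something like this: TLS terminates at the proxy, and the hop to the backend stays plain HTTP on the loopback interface. All paths, ports and hostnames below are placeholders.
server {
    listen 443 ssl;
    server_name my-domain;

    ssl_certificate     /etc/ssl/certs/my-domain.crt;
    ssl_certificate_key /etc/ssl/private/my-domain.key;

    location /api/ {
        # Plain HTTP from here on: only safe if you trust everything that
        # can reach 127.0.0.1 on this machine.
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}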
One strategy to mitigate the dangers created by spyware is either to use virtualisation to split a single server into logical servers, or to use different hardware for things that are not related. This at least keeps things separate, so the people responsible for application A won't assume that an unfamiliar service X must be something the team running application B is using. Anything out of place is more likely to stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow security best practices on the server. For instance, don't work as root; use a non-root user for administrative tasks. Likewise, don't run long-lived services as root.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
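A short least-privilege sketch with chown and chmod (the paths, the www-data user and the adm group are assumptions about a typical Debian/Ubuntu layout):
sudo chown -R www-data:www-data /var/www/example    # web root owned by the service user
sudo chmod -R 750 /var/www/example                  # owner rwx, group rx, others nothing
sudo chown -R www-data:adm /var/log/nginx           # service writes logs, admins (group adm) read
sudo chmod 750 /var/log/nginx
sudo chmod 640 /var/log/nginx/*.log                 # owner read/write, group read, others nothing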
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self signed certificate. The other uses a tool called mkcert which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even by the author of mkcert, is that this is only recommended for development purposes, not production purposes. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
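If you do want HTTPS on localhost during development, a minimal mkcert sketch (development only, as noted above):
mkcert -install              # create a local CA and add it to the system/browser trust stores
mkcert localhost 127.0.0.1   # issue a certificate and key for localhost in the current directory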
I develop and manage a small promotional/marketing website on Wordpress for a startup SaaS product. We're using Cloudflare for DNS and whatnot. Apparently the WAF has been turned on, which proxies traffic and changes the user's apparent IP address. I'm trying to use IP addresses to filter "internal" traffic for Google Analytics, and the only way this works is with the WAF turned off. If not using the WAF is going to cause any sort of significant risk for my website, then obviously I'll need another way to do my analytics thing. Reading about what it provides on their website doesn't make it all that clear to me how important it is for a website like this. If anyone who "gets it" has some insight to share, I'd be most appreciative. thx!
You should definitely use the WAF - it will protect your website from many malicious bots and attacks.
Wordpress sites are particularly juicy targets for attackers, for a number of reasons:
The security of a default Wordpress installation is not great.
Every Wordpress site shares common defaults, such as the location of the admin login page, the admin username, and other exploitable resources.
Wordpress is extremely popular, and currently used by an estimated third of all websites on the internet.
Wordpress is used by many, many small businesses and hobbyists who do not know how to secure their sites properly.
Ergo, attackers can very easily scour the web for Wordpress websites that are easy to hack. Other nefarious activities, such as comment spam or denial-of-service attacks, are also carried out with ease against many Wordpress sites.
What protection does the WAF offer?
Cloudflare and most other high quality WAFs can be configured to protect your site by automatically performing actions like:
Blocking known bad IP addresses.
Blocking bad bots which are automatically making requests to your site.
Limiting high numbers of requests from one source in a short amount of time (usually a sign of a DoS attack or scraping).
Blocking requests from particular countries or locations.
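For illustration, a custom Cloudflare firewall rule is just a filter expression plus an action; something like the following (with the Block action) would stop POSTs to xmlrpc.php and traffic from selected countries. The path and the placeholder country codes are examples only, not recommendations:
(http.request.method eq "POST" and http.request.uri.path eq "/xmlrpc.php") or (ip.geoip.country in {"XX" "YY"})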
There is no reason why you wouldn't want to enable this protection if you have it available to you, and Cloudflare is the industry leader in this area.
Additionally, I would recommend you research how to better secure your Wordpress site in ways other than just the WAF - e.g. The Ultimate WordPress Security Guide
How to solve the IP address issue
Cloudflare is not changing the user's (the client) IP address, but rather acting as a proxy. As you have noticed, the IP address you're seeing is not the client's own, but one of Cloudflare's. This is crucial to how Cloudflare works to protect your site, but this is a common issue when using any kind of proxy.
To get the correct IP address when using a proxy, you need to check the X-Forwarded-For header. You might see this as a string of comma-separated IP addresses, depending on how many proxies the user has gone through before reaching the site. The first (left-most) one in the list is the original client IP.
e.g. Here 203.0.113.1 is the client's original IP address:
X-Forwarded-For: 203.0.113.1,198.51.100.101,198.51.100.102
Documentation: How does Cloudflare handle HTTP Request headers?
Anyway, it's good to use a function which checks the relevant headers and gives you the best match for the original client IP, whether or not the user is behind a proxy, so that it works consistently. Bear in mind that forwarded-for headers can be spoofed by the client, so only trust them on requests that actually came through a proxy you trust, such as Cloudflare's published IP ranges.
Here's a very popular StackOverflow question about this:
What is the most accurate way to retrieve a user's correct IP address in PHP?
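If you would rather fix this at the web server rather than in application code, and your origin runs Apache, mod_remoteip can restore the real client IP from Cloudflare's CF-Connecting-IP header. A sketch (the trusted ranges shown are examples only; use the full, current list that Cloudflare publishes):
# Take the client IP from CF-Connecting-IP, but only when the request comes
# from a trusted Cloudflare range.
RemoteIPHeader CF-Connecting-IP
RemoteIPTrustedProxy 173.245.48.0/20
RemoteIPTrustedProxy 103.21.244.0/22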
Here is the situation:
I have a static website hosted on AWS S3 (www.mysite.com); however, I also want to attach a blog at a sub-path within my domain (www.mysite.com/blog), which uses WordPress on an EC2 instance.
How would I go about it?
Yes, this can be done, but you need to understand why the solution works the way it does.
A domain name points to a single logical endpoint, which handles all requests for the domain. If certain paths are handled by one system and other paths are handled by another one, then a single system must be in charge of routing those requests, and routing them through itself.
You cannot configure how paths are handled using DNS.
In my answer to Can I use CloudFront to serve a WordPress blog from the same domain, but a different server? (at Server Fault), I gave an overview of how this can be done in an AWS-centric solution. CloudFront, in addition to providing caching, is a reverse proxy, and that's fundamentally what you need here: a reverse proxy to examine each request and select the correct back-end server. There are other solutions, such as using HAProxy in EC2 to handle request routing, but they are more complex and may not perform as well in all cases.
You configure a CloudFront distribution with two origin servers (your bucket's web site endpoint and the Wordpress server), and use two cache behaviors so that /blog* goes to Wordpress and everything else goes to the bucket. Then you configure your domain name as an alternate domain name on the CloudFront distribution, and point your domain name to CloudFront in DNS.
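A heavily simplified CloudFormation sketch of that distribution (all names, domains, and origin IDs are placeholders, and a real template would also need your certificate and caching settings):
Resources:
  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Aliases:
          - www.mysite.com
        Origins:
          - Id: s3-website                  # the bucket's website endpoint
            DomainName: www.mysite.com.s3-website-us-east-1.amazonaws.com
            CustomOriginConfig:
              OriginProtocolPolicy: http-only
          - Id: wordpress-ec2               # the EC2 instance running WordPress
            DomainName: wp-origin.mysite.com
            CustomOriginConfig:
              OriginProtocolPolicy: http-only
        CacheBehaviors:
          - PathPattern: /blog*             # /blog requests go to WordPress
            TargetOriginId: wordpress-ec2
            ViewerProtocolPolicy: allow-all
            ForwardedValues:
              QueryString: true
              Cookies:
                Forward: all                # WordPress needs its cookies
        DefaultCacheBehavior:               # everything else goes to the bucket
          TargetOriginId: s3-website
          ViewerProtocolPolicy: allow-all
          ForwardedValues:
            QueryString: false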
The slightly tricky part here is that the WordPress site must be rooted at /blog and not at /, because CloudFront will actually send the /blog (at the beginning of the path) to the WP machine, so it needs to expect that. CloudFront uses the path prefix to select the origin server (by finding the matching cache behavior for that path), but it can't remove the path prefix.¹
WordPress is not my specialty, but this appears to require only some WP configuration changes that seem fairly straightforward.
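Commonly (and this is an assumption about your setup, not something the CloudFront side dictates) that means telling WordPress its public URLs live under /blog, for example in wp-config.php:
define( 'WP_HOME',    'http://www.mysite.com/blog' );    // public site URL, including /blog
define( 'WP_SITEURL', 'http://www.mysite.com/blog' );    // WordPress core URL, including /blog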
¹ CloudFront can't remove the path prefix. This will change when Lambda@Edge launches, which is an integration of CloudFront and Lambda functions, allowing server-side examination and manipulation of request and response headers from within the CloudFront infrastructure, but even when that is available, it will still be advisable to configure WP to not be at the root, for simplicity.
By a different subdomain, I'm assuming you want the blog to be at blog.mysite.com. I think that is the only way to go (you cannot use /blog; you will have to use a subdomain). In that case, the following steps should work:
Create an Elastic IP and attach it to the EC2 instance
Configure the EC2 WP instance to respond to blog.mysite.com
In the DNS provider for mysite.com, create an A record for blog pointing to the Elastic IP of the EC2 instance (an example record is shown below)
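Roughly what that record looks like in a BIND-style zone file (203.0.113.10 stands in for your Elastic IP):
blog.mysite.com.   300   IN   A   203.0.113.10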
I am currently building a web app which also uses websockets (Rails for the web server and Node.js for socket.io).
I have structured my application to use subdomains to separate connections to the Node.js server from connections to the Rails web server. I have "socket.mysite.com" routed to the Node server and everything else to the web server.
I am able to test this functionality on localhost. I simply modified my /etc/hosts to include the following:
127.0.0.1 socket.mysite.com
127.0.0.1 mysite.com
I know that on production I simply have to generate a CNAME record for socket.mysite.com and this will also work on my users' computers.
However, I am accustomed to testing my application by passing an IP address around. My team typically set up the server on our own machines and do development. When we want to test our individual servers, we just pass around an IP like "http://123.45.123.45".
With the new subdomain hack, this is no longer possible without modifying each of my testers' /etc/hosts. I honestly don't expect my testers to modify their /etc/hosts on the spot. What I could do is have each member of my team get their own domain and create the appropriate CNAME records for each individual team member.
Is there an easier way to allow me to run my app on an IP and just pass that IP around?
It sounds like your needs have scaled beyond the days of simply editing a hosts file. While you could have everyone on your team keep editing hosts files, there are two main risks I see here:
With your idea of just using IP addresses, you risk missing something in testing that you would only see in production, as the issue may depend on something in the domain configuration.
With hosts entries, you introduce a lot of complexity and unnecessary changes to each developer's and tester's configuration, which of course leaves the door open for mistakes, and it also takes time that adds up over the long term.
Setting up a DNS server may be helpful in your case. You could map a set of domains for each developer that match a certain pattern so that your application still runs correctly. This would allow you to share the URLs without having to constantly reconfigure each person's computer. Additionally, marketing and sales stakeholders can easily view product demos as well, without needing to learn what the elusive hosts file is for.
If you have an IT department, they can help you setup the DNS. However, if you are a small team without a real IT department, some users have found success using DNS systems designed for home or small office networks.
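For instance, with something like dnsmasq on an internal DNS box, one line maps a per-developer domain and all of its subdomains to that developer's machine (the names and address here are hypothetical):
# dev-alice.mysite.test and every subdomain of it, including
# socket.dev-alice.mysite.test, resolve to Alice's machine.
address=/dev-alice.mysite.test/10.0.0.12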
I would like to know if it is possible using IIS and ASP.NET (and ideally something that might be employed on a shared hosting account, but this isn't required) to mimic WordPress.com's ability to allow end users to use their own domain names.
WordPress has users who own their own domains change the domain's DNS settings to point to WordPress's own DNS. My guess is this is not something that could be done on a shared hosting account, since it would involve adding an entry to the DNS server's table for each custom user domain.
However, for future reference, is this something that might be automated programmatically on perhaps a VPS?
My guess is this is not something that would be able to be done on a shared hosting account
You're nearly correct. The default site in IIS listens to all connections on port 80 for the default IP address.
You can add more sites in 3 ways:
Add new sites listening on different ports. This is not entirely practical if you want "ordinary" sites listening on port 80.
Add more IP addresses to the box (not too easily done) and set up new IIS sites to listen to the new IP addresses independently.
Add new sites to the server listening to different "host headers" (domain names to you and me) but on the same (default) IP address.
So called "Shared hosting" usually uses options 3, because a hosting company can get away with only using a single IP address for possibly hundreds of sites.
Therefore you would have to go through the tedious process of adding each host header to the box, and while I'm almost certain this could be done with WScript, I'm no expert in that area.
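On IIS 7 or later, for example, a host-header binding can be added from a script with appcmd; a sketch (the site name and the customer domain are placeholders):
REM Add a host-header binding for a customer's domain to an existing site.
%windir%\system32\inetsrv\appcmd set site /site.name:"CustomerSites" /+bindings.[protocol='http',bindingInformation='*:80:www.customerdomain.com']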
If you really wanted to get into it, you could write an ISAPI module to intercept the calls and set up some clever(ish) database/hash table of domain names and target folders to serve as the different sites.
Bottom line is, there are various ways to achieve this on Windows. Probably none quite as easy as on a *nix platform where everything is super-scriptable.
What we do is have a wildcard DNS entry set up for our domain. That way, whatever domain the user types will resolve to our website as long as it ends with ".mydomain.com". Then our .Net code just looks at the "HOST" header coming in and serves up the content that matches that domain name.
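For reference, the wildcard itself is a single record in the zone (BIND-style; the address is a placeholder):
; Any name under mydomain.com resolves to the web server.
*.mydomain.com.   3600   IN   A   203.0.113.20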