Iptables for a WordPress site

Usually when we configure iptables rules for a website, we accept incoming connections to HTTP and HTTPS ports.
In the case of WordPress, the CMS also makes HTTP and HTTPS connections to wordpress.org (for example, when you search for a plugin in the dashboard or update WordPress itself).
HTTP/HTTPS connections are also needed when upgrading your system via apt-get or yum.
Since I am not comfortable just allowing all outgoing HTTP/HTTPS connections from a server, do you have any ideas on how to let the system or WordPress make HTTP/HTTPS connections in a safer manner?
Regards,

You can make WordPress use a proxy for outgoing connections, if you don't mind running one somewhere, which would also allow you to filter the traffic to catch misbehaving plugins. You might run into trouble with older plugins that don't use the HTTP API, but I haven't met any in quite some time.
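For reference, WordPress's HTTP API can be pointed at a proxy from wp-config.php. A minimal sketch, where the host, port and bypass list are placeholders for whatever proxy you actually run:

// wp-config.php -- route WordPress's outgoing HTTP API requests through a proxy.
// The host, port and bypass list below are placeholder values, not anything from the question.
define('WP_PROXY_HOST', '10.0.0.5');
define('WP_PROXY_PORT', '3128');
define('WP_PROXY_BYPASS_HOSTS', 'localhost, *.internal.example');

With WordPress (and apt or yum) pointed at the proxy, the iptables OUTPUT rules for ports 80/443 can then be narrowed to the proxy host alone instead of the whole internet.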

Related

JupyterLab does not work when redirected using TLS

I have a local JupyterLab instance running on the mint-2 computer with the command jupyter lab --ip "*", and it listens on port 8888. I can access it just fine via the URL mint-2:8888.
I also have a server instance ubuntu-2. I reverse ssh tunnel from mint-2:8888 to ubuntu-2:8888, meaning I can access it on my mint-1 laptop just fine via the URL ubuntu-2:8888 anywhere in the world.
However, it is not encrypted with TLS, so I wanted to improve this. On ubuntu-2 I have an nginx load balancer container that terminates (strips) HTTPS and forwards the plain HTTP traffic to other locations. I have set up jupyter.ubuntu-2:443 to forward to ubuntu-2:8888, which in turn forwards to mint-2:8888. This version initially seems to open up just fine, and I can navigate directories. However, whenever I try to launch a new terminal or notebook instance, or even create new directories, it doesn't work. Here's the network log when I save a modified notebook:
My question is: why won't these requests go through, considering I can still interact with the interface just fine everywhere else, just not when creating folders/notebooks/terminals? I thought JupyterLab might be using UDP and considered passing UDP traffic through nginx, but that doesn't really make sense, as this is clearly a PUT request. Any other help on where to find more logs, or speculation on what might have gone wrong, is much appreciated.
I dug into it a little more and managed to figure it out.
JupyterLab has a CORS policy that doesn't allow requests to ubuntu-2. I then added c.NotebookApp.allow_origin = "*" to JupyterLab's config at ~/.jupyter/jupyter_lab_config.py, as mentioned here.
Then I found that everything was still not functional, because Jupyter requires both HTTP and WebSocket protocols, and my server setup only allowed plain HTTP traffic. So I needed to enable generic TCP traffic on ubuntu-2's HAProxy load balancer. Because I have multiple virtual hosts on the server, I need to distinguish between them, so I used Server Name Indication (SNI), the server name included in the TLS handshake.
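For anyone hitting the same wall, this is roughly what SNI-based TCP routing looks like in HAProxy. It is only a sketch: the hostnames come from the question, and the backend addresses are placeholders for whatever actually terminates TLS in front of Jupyter and the other virtual hosts.

# haproxy.cfg sketch: route raw TLS connections by SNI without terminating them here.
frontend tls_in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }       # wait for the TLS ClientHello
    use_backend jupyter_tls if { req_ssl_sni -i jupyter.ubuntu-2 }
    default_backend other_vhosts

backend jupyter_tls
    mode tcp
    server jupyter 127.0.0.1:8443    # placeholder: whatever terminates TLS and proxies on to :8888

backend other_vhosts
    mode tcp
    server web 127.0.0.1:9443        # placeholder for the existing virtual hosts

Because the connection is passed through at the TCP level, WebSocket upgrades reach Jupyter untouched.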

Wordpress not working after changing instance type on Google Cloud Platform

I changed my VM instance from "f1-micro" to "e2-micro". When I then restarted the machine, I couldn't access my webpage using the domain name; the page just shows an "Error 521", which indicates that my browser is working and the CDN is working, but the host has an error. When I paste the VM's IP address into my browser, however, it shows the "Apache2 Debian Default Page".
Can somebody please help me with this?
The Error 521 message is caused by one of two situations:
First, check whether your WordPress site’s server is down. Even if everything else is configured properly, if your WordPress site’s server is offline, Cloudflare simply won’t be able to connect.
Second, your web server might be running fine but blocking Cloudflare’s requests. Because of how Cloudflare works, some server-side security solutions might inadvertently block Cloudflare’s IP addresses.
Because Cloudflare is a reverse proxy, all the traffic coming to your origin server appears to come from a small range of Cloudflare IPs (rather than from each visitor's unique IP address). Because of that, some security solutions view high traffic from a limited number of IP addresses as an attack and block them.
Please check this link out in order to fix error 521 for Cloudflare and WordPress.
It turns out this problem was caused by my having installed the Debian Apache server package, which collides with the Apache shipped in the Bitnami stack. Bitnami stacks are completely self-contained and run independently of the rest of the software or libraries installed on your system.
So to fix this, all I had to do was run the following commands:
sudo systemctl stop apache2
sudo /opt/bitnami/ctlscript.sh restart
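If the Debian apache2 service is still enabled, it can grab port 80 again at the next reboot. Assuming a systemd-based image like this one, disabling it keeps the Bitnami Apache in charge:

sudo systemctl disable apache2    # stop Debian's Apache from starting at boot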

Wordpress get_template_directory_uri() behind load balancer

I have a Wordpress website running on an AWS EC2 instance. This is served through an AWS Elastic Load Balancer, which has HTTPS enabled with a certificate I got from Amazon.
The intention is to serve both an http and an https version of the website. Loading the http version works fine.
When I load the https version, however, I'm getting mixed content errors because get_template_directory_uri() always returns http links. The way the load balancer works is that TLS terminates at the LB, and it communicates with the actual EC2 instance over port 80. Therefore, there is no HTTPS on the instance itself.
A lot of this is beyond my skill to heal. I know just enough to have figured out what the problem seems to be, but I'm really not sure what the right way to fix it is.
Assuming I still want to serve both http and https versions of the page (there is no ecommerce or auth on the page -- it's just informational), how should I go about fixing this?
FYI, the EC2 instance is running an Amazon Linux AMI, which is basically RHEL.
So first off, you will find it difficult to run both an HTTP and an HTTPS version of WordPress off the same database, because WordPress saves a lot of links as absolute URLs (i.e. with the http(s)://mydomain.com part), and a lot of plugins just don't bother adapting to the current protocol either.
Your best bet is to redirect all HTTP traffic to HTTPS through your .htaccess file (a sketch follows).
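Behind an ELB that terminates TLS, the usual .htaccess pattern checks the X-Forwarded-Proto header rather than HTTPS, since the instance itself only ever sees port 80. A sketch, assuming mod_rewrite is enabled and that the ELB sets that header (which it does by default):

# .htaccess sketch: send requests that reached the load balancer over HTTP to HTTPS.
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]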
That being said, one way you could do what you asked for is through a filter used by get_template_directory_uri:
add_filter('template_directory_uri', 'smart_template_directory_uri', 10, 3);

function smart_template_directory_uri($template_dir_uri, $template, $theme_root_uri) {
    // Replace "http:" or "https:" with a protocol-relative "//",
    // which the browser resolves against the current page's protocol.
    return preg_replace('/^https?:/i', '//', $template_dir_uri);
}
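A related approach often used behind a TLS-terminating load balancer (not part of the filter above, just a sketch) is to tell WordPress that the original request was HTTPS by honouring X-Forwarded-Proto near the top of wp-config.php, which fixes the scheme for every URL-generating function at once:

// wp-config.php -- trust the load balancer's X-Forwarded-Proto header.
// Only safe when the instance is reachable exclusively through the load balancer.
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}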
Hope this helps!

How to safeguard an AWS EC2-based website from spam and DDoS?

We have a website deployed on AWS EC2 running Ubuntu, Apache and MySQL. We have been getting continuous requests from the IPs below:
195.154.105.219
88.150.242.243
They request the xmlrpc.php file using the POST method. As a result, our website has become really slow and our clients' work has been affected. For now we have blocked these IPs by dropping them in iptables. We would like to know how to safeguard our site from any future attacks like this.
The question is very general, and depending on your application's requirements, your budget and other factors, there are several techniques you can use, separately or together, to mitigate DDoS and spam attacks.
Use Auto Scaling and an Elastic Load Balancer to let AWS scale your infrastructure with traffic: http://aws.amazon.com/autoscaling/
Use S3 to serve static content. S3 is designed to scale automatically with incoming traffic, and any content served directly by S3 offloads your EC2-based web server: http://aws.amazon.com/s3/
Use CloudFront to distribute and serve your content from AWS's edge locations. This mitigates DDoS by spreading attackers' requests across the network of edge locations instead of sending all the traffic to your web server: http://aws.amazon.com/cloudfront/
All three of these options have a cost associated, so be sure to understand the pricing structure before deciding to implement any of them.
If you have a relatively short and stable list of IP addresses you want to block, you can customise either your EC2 instance's Security Group (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html) or your VPC subnet's Network ACL (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) to deny traffic from these addresses; note that Security Group rules can only allow traffic, so an explicit deny needs a Network ACL (a CLI sketch follows). This approach is not very scalable and, most of the time, you will play a cat-and-mouse game trying to keep up with whatever new addresses your attackers use.
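For example, a Network ACL deny rule for one of the offending addresses might look like the following with the AWS CLI; the ACL ID and rule number are placeholders:

# Deny inbound TCP port 80 from one attacking address at the subnet level.
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 --protocol tcp --port-range From=80,To=80 \
    --cidr-block 195.154.105.219/32 --rule-action deny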
Plain old Apache configuration to block certain URLs or restrict access to them by IP address is very effective too (see http://httpd.apache.org/docs/current/en/mod/mod_authz_core.html#require and the Files directive).
Last but not least, I would encourage everyone to watch this re:Invent talk about DDoS resiliency on AWS: https://www.youtube.com/watch?v=V7vTPlV8P3U
Seb
xmlrpc.php is part of WordPress. Install the Disable XML-RPC Pingback plugin, or better yet, deny access to the xmlrpc.php file in the site's .htaccess; that will fix it (a sketch follows). Also check the wp-admin scripts for anything weird, or just run find /var/www/ -type f -mtime -10 to list the most recently modified files and look for any suspicious PHP scripts.
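A minimal .htaccess block for that, assuming Apache 2.4 (the older 2.2 Order/Deny syntax would differ):

# Deny all access to xmlrpc.php (Apache 2.4 syntax).
<Files "xmlrpc.php">
    Require all denied
</Files>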

How to host many SNI certificates in Nginx

My company's product allows our users to custom brand by picking a personal subdomain. We handle this with a wildcard match in nginx and then let Rails decide what to do. We require SSL everywhere and have a wildcard SSL cert, so this all works beautifully.
Now we'd like to offer custom CNAMEs, with SSL, as an add-on feature. Since we don't really want to provision hundreds of IP addresses, we'll use SNI and accept the caveats. What's the best way to set up nginx with all of these certs? We could either allow users to upload their own cert, or we could buy them for the user. Either way, how do we make nginx see them and serve them without restarts and at scale? Can nginx read its config dynamically from MySQL somehow, read the certificate from a script, or pass the certificate responsibility to Rails? Ideas welcome!
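One option worth noting, assuming nginx 1.15.9 or newer (which added support for variables in ssl_certificate): load the certificate and key per SNI name at request time, so adding a customer only means dropping two files into a directory, with no reload. A sketch with a hypothetical /etc/nginx/certs/ layout:

# Sketch: pick the certificate at runtime based on the SNI name the client sends.
# Assumes certificates live at /etc/nginx/certs/<hostname>.crt and .key (hypothetical layout).
server {
    listen 443 ssl;
    server_name _;    # catch-all; the SNI name decides which certificate is loaded

    ssl_certificate     /etc/nginx/certs/$ssl_server_name.crt;
    ssl_certificate_key /etc/nginx/certs/$ssl_server_name.key;

    # ...proxy to the Rails app as in the existing wildcard server block...
}

Loading certificates through variables reads the files on each handshake and skips some startup-time optimisations, so it trades a little TLS performance for never having to reload; on older nginx versions, generating one server block per customer and running nginx -s reload from the provisioning script is the usual fallback.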
