I have purchased a VPS from OVH and installed VestaCP. It has been more than six months and I'm still facing issues with server security. Sometimes my WordPress websites get hacked, and sometimes the server is slow or unresponsive for a whole day. I'm not able to identify the issue. Someone please help me.
Here is a basic checklist to get you started:
Download and run WPScan against your site; you can obtain it here (a typical invocation is sketched after this checklist).
Change all your passwords. Since it's a virtual private server, your pem file might have been compromised, so change the passwords for all access to the site.
Update all your plugins. I can't stress this enough; I see businesses skip this all the time. Make sure you are on the latest WordPress version as well (a WP-CLI sketch follows the checklist).
If your website is beyond repair at this point, download all the files, do a fresh install of WordPress, and restore what you can.
Invest in an SSL certificate to encrypt your data; this will protect you and your users from MITM (man-in-the-middle) attacks.
Update your .htaccess file with restrictions; try these.
If you don't have an IDS/IPS to detect SQL injection, consider installing ModSecurity; you can download that here (an install sketch follows the checklist).
Since it's a virtual private server, if a backdoor has been planted you might also want to consider a full wipe and a restore of files you know are secure.
Close ports you don't need. If you don't use certain ports, close them (an example firewall lockdown is sketched after the checklist).
Update the web server applications: Apache, MySQL, and the others. If your distribution doesn't ship the latest version, you should be able to download it manually, and on Linux you can compile and run the latest source.
For countries that mean nothing to your business, block them with a country-blocking plugin, but make sure the plugin itself is secure; the key is to do your research.
Install something like WPSecurity and limit the number of failed logins allowed before the account is locked out, or have the IP address blocked for certain usernames after so many failed attempts.
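A few of the items above map to concrete commands. The sketches below assume a Debian/Ubuntu-style server and use example.com as a placeholder domain, so adjust them for your own setup.

Running WPScan against a site (assuming it is installed, e.g. via gem install wpscan):
# Scan the site and enumerate vulnerable plugins (vp) and users (u)
wpscan --url https://example.com --enumerate vp,u

Updating WordPress core and plugins from the shell, assuming WP-CLI is available and you run it from the site's document root:
# Update core first, then all plugins
wp core update
wp plugin update --all

A minimal firewall lockdown with ufw, assuming you only need SSH, HTTP and HTTPS (add any other ports you actually use before enabling):
# Deny everything inbound by default, then open only what is needed
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

Installing ModSecurity for Apache (package names differ on other distributions):
# Install the module and switch the recommended config to blocking mode
apt-get install libapache2-mod-security2
cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf
sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/modsecurity/modsecurity.conf
systemctl restart apache2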
If it's a Linux VPS, try these commands to see what your server is up to:
# Check for remote connections
netstat -a
# Monitor network usage by application
nethogs eth0
# Monitor the system log for authorizations
tail -f /var/log/auth.log
# Monitor the firewall log
tail -f /var/log/ufw.log
# Monitor packets (look for malformed ones)
tshark -i eth0
More than anything, you should be doing incident response at this point since it's a VPS. There are some great methodologies on this website that may help as well.
Hope this helps.
--lillypad
I changed my VM instance from "F1-micro" to "E2-micro". When I then restarted my machine, I couldn't access my webpage using the domain name; the webpage just shows an "Error 521" code, indicating that my browser is working and the CDN is working, but the host has an error. When I paste the VM's IP address into my browser, however, it shows the "Apache2 Debian Default Page".
Can somebody please help me with this?
The Error 521 message is caused by one of two situations:
First, check whether your WordPress site’s server is down. Even if everything else is configured properly, if your WordPress site’s server is offline, Cloudflare simply won’t be able to connect.
Second, your web server might be running fine but blocking Cloudflare’s requests. Because of how Cloudflare works, some server-side security solutions might inadvertently block Cloudflare’s IP addresses.
Because Cloudflare is a reverse proxy, all the traffic reaching your origin server will appear to come from a small range of Cloudflare IPs (rather than each individual visitor's unique IP address). Because of that, some security solutions will treat high traffic from a limited number of IP addresses as an attack and block them.
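To check both possibilities from the origin server itself, something like the following can help. This assumes Apache and ufw (adapt it for nginx or another firewall); the URL is Cloudflare's published list of IPv4 ranges:
# Is the web server up and answering locally?
systemctl status apache2
curl -I http://127.0.0.1/
# Make sure Cloudflare's published IPv4 ranges are not blocked (ufw example)
for net in $(curl -s https://www.cloudflare.com/ips-v4); do
    ufw allow from "$net" to any port 443 proto tcp
done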
Please check this link out in order to fix error 521 for Cloudflare and WordPress.
It turns out this problem was caused by my having installed the Debian Apache server package, which collided with the Apache shipped in the Bitnami stack. Bitnami stacks are completely self-contained and run independently of the rest of the software or libraries installed on your system.
So to fix this, all I had to do was run the following commands:
sudo systemctl stop apache2
sudo /opt/bitnami/ctlscript.sh restart
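Since the Debian apache2 service may start again on the next boot and cause the same collision, it's probably also worth disabling it so only the Bitnami Apache runs:
sudo systemctl disable apache2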
I want to create a secure environment and block uploading to any destination on the Internet. How can I achieve that using pfSense?
Is pfSense the right tool for this?
I tried limiting upload to 8 bits per second, and now I can't download either (downloads got limited as well).
Could Squid be a good solution for what I'm looking for?
P.S. I still want to download files via git, HTTP, HTTPS, and SSH; for example, yarn install and "composer install" should still work.
The goal is to block uploading files to anywhere outside the network behind the pfSense.
In short, you can't do it with stock pfSense.
You'll need a firewall that can inspect SSL and SSH.
You can run the Squid proxy on pfSense, which can do ssl-bump to inspect HTTPS traffic, and with Squid you can block file uploads for HTTP (and for HTTPS via ssl-bump); a rough sketch follows below.
If you want to inspect SSH and limit file uploads over SSH, you'll need a Palo Alto, a Fortigate, or another next-gen firewall that can inspect SSH.
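To give a rough idea of the Squid side: one way to block most HTTP(S) uploads is to cap the allowed request body size. This is only a sketch assuming a stock Squid on Linux that reads /etc/squid/conf.d/; on pfSense the Squid package manages its own config, so you would paste the directive into the package's custom options instead, and the 64 KB figure is an arbitrary choice.
# Reject request bodies larger than 64 KB, which blocks most POST/PUT uploads
# over HTTP (and over HTTPS too once ssl-bump is in place)
echo 'request_body_max_size 64 KB' > /etc/squid/conf.d/block-uploads.conf
squid -k reconfigure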
tl;dr: You can't! But you can use trickle.
Explanation
Every time we create a TCP session we upload data to the internet, whether it's the 3-way handshake, an HTTP request, or a file POSTed to a server, so you cannot create a session without also being able to upload data. What you can do is limit the bandwidth per application.
Workaround 1
You can use trickle.
sudo apt-get install trickle
You can limit upload/download for a specific app by running
trickle -u (upload limit in KB/s) -d (download limit in KB/s) application
This way you can limit HTTP and other applications while still being able to use git; a concrete example follows.
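For example, to run a single download under tight limits (the numbers here are arbitrary):
# Cap this wget to roughly 10 KB/s up and 200 KB/s down
trickle -u 10 -d 200 wget https://example.com/somefile.tar.gz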
Workaround 2
Another way is to deny all applications access to the internet and allow only specific applications by exception; a minimal sketch of that idea follows.
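pfSense itself can't see which local application opened a connection, but on a Linux client you can sketch the deny-by-default idea with iptables' owner match: put the allowed tools in a dedicated group and only let that group out. The group name netok and the rules below are just an illustration.
# Create a group for applications that are allowed to reach the internet
sudo groupadd netok
# Allow loopback and the exception group, reject everything else outbound
sudo iptables -A OUTPUT -o lo -j ACCEPT
sudo iptables -A OUTPUT -m owner --gid-owner netok -j ACCEPT
sudo iptables -A OUTPUT -j REJECT
# (depending on the setup, you may also need to allow your local DNS resolver)
# Run only trusted tools under that group, e.g. git:
sg netok -c "git clone https://github.com/example/repo.git"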
I run Apache over HTTPS and can see in the log file that an HTTP/1.1 request is made for every single file in my repository, and for every single file the full URL is disclosed.
I need to access my repository from a location where I don't want sysadmins looking over my shoulder and seeing all these individual URLs. Of course I know they won't see file contents, since I am using HTTPS and not HTTP, but I am really annoyed that they can see URLs and, as a consequence, file names.
Is there a way I can hide or encrypt HTTPS URLs with SVN?
This would be great, as I would prefer not to resort to using svn+ssh, which does not easily/readily support path-based authorization, which I use heavily.
With HTTPS, the full URL is only visible to the client (your svn binary) and the server hosting the repository. In transit, only the hostname you're connecting to is visible.
You can further protect yourself by using a VPN connection between your client and the server, or by tunneling over SSH (not svn+ssh, but a direct SSH tunnel); a rough sketch follows.
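A sketch of the direct-tunnel option, using made-up names (svn.example.com is the repository host and user is your account there); note that svn will warn that the certificate doesn't match "localhost", which you would have to accept:
# Forward a local port to the repository's HTTPS port over SSH
ssh -N -L 8443:localhost:443 user@svn.example.com &
# Point the svn client at the tunnel; on the wire only the SSH connection is visible
svn checkout https://localhost:8443/svn/myrepo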
If you're concerned about the sysadmin of the box hosting your repository seeing your activity in the Apache logs there, you have issues far beyond what can be solved with software. You could disable logging in Apache, but your sysadmin can switch it back on or use other means.
Last option: if you don't trust the system(s) and/or network you're on, don't engage in activities that you consider sensitive on them. They can't see something that isn't happening in the first place.
I belong to a team of several people, spread around the world.
We are building software for a certain platform, let's call it "Platform S". To develop for this platform, two resources are necessary. One is the SDK, which can only be installed using a certain installer that connects directly to the Platform S central server and installs it on our machine. The other resource is the developer website, where people post their questions and doubts about the SDK and the hardware it runs on.
The problem is that to connect to these resources, both the forums and the SDK download/update, I always need to have the same IP address.
To solve this problem, I initially created a server with a fixed IP and installed proxy software on it, so that we could configure our local machines to connect through the proxy and all have the same IP address.
Of course, to prevent the proxy from being used for nefarious purposes by hackers and others, I protected it with a password. When accessing the forums this was no problem, as the browser opened a small dialogue window to ask me for the user and password. But the installer in charge of installing/updating the SDK does not offer this dialogue window. The last time, I disabled the password for a while, since updating the SDK is not a task one does that often, but after just a couple of hours I already got a notice from my server provider warning me that the server was being used for malicious purposes. So that solution was not appropriate.
What approach could I take to solve my problem? Is the proxy idea the wrong way to go?
WooCommerce webhooks aren't firing at all for me, even on a fresh install. I did the following:
Created a new MySQL database.
Installed WP from the zip file.
Set up WP.
Installed WooCommerce.
Enabled the REST API and created a key.
Added a "Coupon created" webhook, made sure it's set to active, and pointed it at a publicly accessible site.
When I create a coupon, the webhook does not fire and no entry is created in the log. I tried this with orders as well, and it also doesn't work.
I think it's a machine configuration problem, but I'm not sure what to change. The machine is an EC2 instance and has all ports opened in its security group policy.
Weirdest of all, it does work on a different EC2 instance, but that's a production machine and I want a dev server working so I can test things out. The only config differences between the production and dev machines that I can think of are the subnets and the firewall, but I don't understand why the subnet should matter, and I opened all the firewall ports on the dev machine.
What Linux distributions are you running for prod and dev?
CentOS ships with SELinux enabled, which by default does not allow HTTPD scripts and modules to connect to the network. If that's your case, run:
setsebool -P httpd_can_network_connect on
If the above is not the issue, identify network problems by trying to connect to AWS RDS from the instance's SSH CLI. If you can open a connection from the CLI, the problem is with your application; if you can't, it is a network problem. The first thing to check in that case is the AWS RDS security group; for testing, you can open 3306 to the public. A quick connectivity check is sketched below.
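For instance, from the instance's shell you could test whether the database port is reachable at all (the endpoint below is a placeholder; use your own RDS endpoint):
# Check that MySQL's port on the RDS endpoint answers
nc -zv mydb.abc123.us-east-1.rds.amazonaws.com 3306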
Let me know how it goes.