Implementing a simple software updater using rsync

I'm trying to find a way to update client software while reducing traffic and the load on the update server.
Case:
The server is just an HTTP server that hosts the latest uncompressed/unpacked version of the software.
The client uses rsync to download the changes.
Does the server have to run an rsync instance/host/service (not sure what to call it) in order to produce the delta files?
I've seen some forum questions about downloading files with rsync where it seemed like the server didn't need an rsync instance. If the server isn't running rsync, is that download going to be done without delta transfer?
Do you know of other solutions that can reduce network and server load?

The server doesn't need any special software other than an SSH server.
I was incorrect about this for your use case. I believe what you are looking for is rsync's daemon mode on the server. This has rsync listen on a port and serve requests directly.
I misunderstood what you were trying to do at first. While in theory it might still be doable with only SSH, I think daemon mode is the better solution.
See: SSH vs Rsync Daemon
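A rough sketch of what daemon mode looks like (the module name "software", the path /srv/software and the host updates.example.com are placeholders, not anything from your setup):
# minimal /etc/rsyncd.conf on the update server
[software]
    path = /srv/software
    read only = yes
# start the daemon on the server (it listens on TCP 873 by default)
rsync --daemon
# on the client: sync against the existing install; only changed blocks are transferred
rsync -az rsync://updates.example.com/software/ /opt/myapp/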

Is it normal for my router to have activity on port 111?

What are typical results of nmap 192.168.1.1 for an average Joe? What would be a red flag?
PORT STATE SERVICE
111/tcp filtered rpcbind
What does this mean in context and is it something to worry about?
Basically, rpcbind is a service that enables file sharing over NFS. The rpcbind utility is a server that converts RPC program numbers into universal addresses; it must be running on a host in order for RPC calls to be made to a server on that machine. When an RPC service is started, it tells rpcbind the address at which it is listening and the RPC program numbers it is prepared to serve. So if you have a use for file sharing, it's fine; otherwise it's unneeded and a potential security risk.
You can disable them by running the following commands as root:
update-rc.d nfs-common disable
update-rc.d rpcbind disable
That will prevent them from starting at boot, but if they are already running they will keep running until you reboot or stop them yourself.
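On a systemd-based system, the equivalent of the update-rc.d commands above would be roughly the following (unit names can vary by distribution, so treat this as a sketch):
# stop rpcbind now and keep it from starting at boot
systemctl disable --now rpcbind.service rpcbind.socket
# verify nothing is listening on port 111 any more
ss -tlnp | grep ':111'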
And if you are looking to get into a system through this service, there is plenty of reading material available on Google.

How to limit/disable the upload on the network

I wish to create a secure environment and block uploading to any destination on the Internet. How can I achieve that using pfSense?
Is pfSense the right tool for this?
I tried to limit the upload to 8 bits per second, and now I cannot download either (downloads also get limited).
Could Squid be a good solution for what I'm looking for?
P.S. I still want to download files via git, HTTP, HTTPS and SSH; for example, yarn install and "composer install" should work.
The goal is to block uploading files from behind the pfSense to the outside.
In short, you can't do it with stock pfSense.
You'll need a firewall that can inspect SSL and SSH.
You can run the Squid proxy on pfSense, and it can do SSL bumping, which can be used to inspect HTTPS traffic. With Squid you can block file uploads for HTTP (and for HTTPS with ssl-bump).
If you want to inspect SSH and limit file uploads over SSH,
you'll need a Palo Alto, a Fortigate or another next-gen firewall that can inspect SSH.
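For the HTTP/HTTPS side, blocking uploads with Squid usually comes down to denying the request methods used to push data. A rough sketch of the ACL lines (how you add custom options to the pfSense Squid package may differ, and denying POST/PUT is only an approximation of "block all uploads"):
# deny the methods normally used to upload data to a server
acl upload_methods method POST PUT
http_access deny upload_methods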
tl;dr: You can't! But you can use trickle.
Explanation
Every time we create a TCP session we upload data to the Internet, whether it's a 3-way handshake, an HTTP request or POSTing a file to a server, so you cannot create a session without being able to upload at least some data. What you can do is limit the bandwidth per application.
Workaround 1
You can use trickle.
sudo apt-get install trickle
You can limit upload/download for a specific app by running:
trickle -u (upload limit in KB/s) -d (download limit in KB/s) application
This way you can limit HTTP and other applications while still being able to use git.
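For example (the 10 KB/s up / 1024 KB/s down limits and the URL are arbitrary):
trickle -u 10 -d 1024 wget https://example.com/somefile.tar.gz
Note that trickle works by preloading a library, so it only affects dynamically linked applications.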
Workaround 2
Another way is to deny all applications access to the Internet and allow only specific applications by exception.
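On the Linux side, one way to sketch that is to run the allowed application under a dedicated user and only let that user's traffic out (the user name deploy is just an example, and you'd normally persist these rules with your distribution's tooling):
# allow loopback, allow the dedicated user, reject all other outbound traffic
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner deploy -j ACCEPT
iptables -A OUTPUT -j REJECT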

Azure DevOps Pipeline Task to connect to Unix Server and execute commands

I am seeking to set up a Release Pipeline in Azure DevOps Services that will deploy
an application to a Unix server, where it then executes some unix commands as part
of the deployment.
Would appreciate some guidance on which pipeline Task(s) I can set up to
achieve the following objectives:
Connect to the Unix server.
Execute the required Unix commands.
By the way, the Agents are currently installed on Windows hosts but we are looking to
extend that to Unix servers in due course, so a solution that fits both setups would
be ideal, even though the former is the priority.
You can check out the SSH Deployment task.
Use this task to run shell commands or a script on a remote machine using SSH. This task enables you to connect to a remote machine using SSH and run commands or a script.
If you need to copy files to the remote Linux server, you can check out the Copy Files Over SSH task.
You will probably need to create an SSH service connection. See the steps here to create a service connection.
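If you use YAML pipelines, the two tasks look roughly like this (the service connection name my-unix-server, the paths and the deploy script are placeholders; in a classic Release pipeline you add the same tasks through the task picker instead):
steps:
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'my-unix-server'
    sourceFolder: '$(Pipeline.Workspace)/drop'
    targetFolder: '/opt/myapp'
- task: SSH@0
  inputs:
    sshEndpoint: 'my-unix-server'
    runOptions: 'inline'
    inline: |
      cd /opt/myapp && ./deploy.sh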
In the end, due to concerns raised about installing private keys on the target server (which is part of the SSH Deployment setup), we opted to use Deployment Groups, which has enabled us to set up a persistent connection to our Linux server.

My WordPress websites on a VPS server are getting hacked regularly

I have purchased a VPS server from OVH and installed VestaCP. It has been more than 6 months and I'm still facing issues with server security. Sometimes my WordPress websites get hacked; sometimes the server is slow or not responding for a whole day. I'm not able to identify the issue. Someone please help me.
Here is a basic checklist to get you started:
Download and run WPScan against your site; you can obtain it here.
Change all your passwords. Since it's a virtual private server, your .pem file might have been compromised, so change the passwords for all access to the site.
Update all your plugins. I can't stress this enough, and I see businesses skip it all the time. Make sure you are updated to the latest WordPress version as well.
If your website is beyond repair at this point, download all the files, do a fresh install of WordPress and restore what you can.
Invest in an SSL certificate to encrypt your data; this will protect you and your users from MITM (man-in-the-middle) attacks.
Update your .htaccess file with restrictions; try these.
If you don't have an IDS/IPS to detect SQL injection, consider installing ModSecurity; you can download that here.
Since it's a virtual private server, if a backdoor has been planted you might also want to consider doing a full wipe and restoring only files you know are clean.
Close ports you don't need. If you don't use certain ports, close them (see the firewall sketch after this list).
Update the web server applications: Apache, MySQL and others. If you don't have the latest version, you should be able to download it manually, and on Linux just compile and run the latest source.
For all the countries that don't mean anything to your business, block them with a country-blocking plugin, but make sure the plugin itself is secure; the key is to do your research.
Install something like WP Security and limit the number of failed logins before a user is locked out, or have the IP address blocked for certain usernames after so many failed attempts.
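For the "close ports" item, a simple host firewall policy looks something like this (assuming the server only needs SSH, HTTP and HTTPS; adjust the list to what you actually run):
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable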
If it's a Linux VPS, try these commands to see what your server is up to:
# Check for remote connections
netstat -a
# Monitor network usage by application
nethogs eth0
# Monitor the system log for authorizations
tail -f /var/log/auth.log
# Monitor the firewall log
tail -f /var/log/ufw.log
# Monitor packets (look for malformed ones)
tshark -i eth0
You should be doing incident response at this point more than anything, since it's a VPS. There are some great methodologies on this website that may help as well.
Hope this helps.
--lillypad

Any issues running a daemon via XSP2?

We want to run a daemon that exposes itself via ASMX, using Mono 2.0 (or later). Instead of dealing with the ASP.NET hosting APIs, we're thinking about just starting a daemon thread in the Application_Start event. XSP2 shouldn't restart the appdomain, so our daemon will be safe.
Are there any downsides to this (besides being a bit odd)? Any other approaches that allow us to have our code running in the same appdomain as the ASMX requests?
Why do you need XSP to run a daemon by calling an ASMX when you can just build a console application (with the same code, or accepting arguments)? It can be called from a terminal or from any shell script and added to cron. Simple, with no server required.
If you want to do this (not the way I would do it), you can set up a basic server instance (using nginx, lighttpd or Apache) listening on a certain internal port, add that server to a dummy host, and from cron or a shell script do
wget http://dummyhost/mydaemon.asmx
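For example, a crontab entry that pokes the endpoint every five minutes (the URL is the same dummy host as above):
*/5 * * * * wget -q -O /dev/null http://dummyhost/mydaemon.asmx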
