rsync with --inplace deletes the directory [closed]

I have rsync running continuously between two systems whose TCP connection gets interrupted for known reasons.
In rare cases the entire rsync destination directory is deleted and the data gets synced to an alternative location.
The rsync options used are "-rpt -iP --stats --inplace". I have read that --inplace can leave files inconsistent when the connection is interrupted.
-rpt -iP --stats --inplace >> FAILS
I need help coming up with the safest approach to avoid rsync inconsistencies in an environment with frequent connection disruptions.

If you need a consistent way of syncing a whole directory, use:
rsync -avz \
--partial \
--partial-dir=.rsync-partial/ \
--delay-updates \
--delete \
--stats \
...
The Linux man page says about --inplace:
This has several effects: (1) in-use binaries cannot be updated (either the OS will prevent this from happening, or binaries that attempt to swap-in their data will misbehave or crash), (2) the file's data will be in an inconsistent state during the transfer, (3) a file's data may be left in an inconsistent state after the transfer if the transfer is interrupted or if an update fails
So --inplace cannot be used for consistent syncing. Instead, use --delay-updates, which guarantees that changes to the destination directory are applied only after the transfer has completed successfully.
Also, you may decide to use -a instead of -rpt. The -a option is equivalent to -rlptgoD, which is a complete set of options for consistent syncing. The -vz options are useful for verbose output and for compression during transfer (reducing traffic).
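Since the environment sees frequent disconnects, here is a minimal sketch of how the options above could be wrapped in a retry loop (the paths, remote host, and 30-second back-off are placeholders, not part of the original answer):

#!/usr/bin/env bash
# Retry rsync until it completes; --partial-dir lets an interrupted
# transfer resume instead of starting over, and --delay-updates keeps
# the destination consistent until the whole run succeeds.
SRC=/path/to/source/
DEST=user@remote:/path/to/dest/
until rsync -avz --partial --partial-dir=.rsync-partial/ --delay-updates --delete --stats "$SRC" "$DEST"; do
    echo "rsync interrupted (exit $?), retrying in 30s..." >&2
    sleep 30
done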

Related

Protect/encrypt R package code for distribution [closed]

I am writing a package in R and would like to protect/encrypt the code. Basically, when someone looks into my package code, it should be encrypted and not readable. I have read that someone has encrypted his code (1), but I haven't found any more information about it. I know I could write the code in C/C++ and compile it, but I would like to keep it in R and just "protect" it there.
My question is: is this possible, and if so, how?
I appreciate your answers!
Reference:
(1) link
Did you try following that thread?
https://stat.ethz.ch/pipermail/r-help/2011-July/282717.html
At some point the R code has to be processed by the R interpreter. If you give someone encrypted code, you have to give them the decryption key so R can run it. Maybe you can hide the key somewhere and hope they don't find it. But they must have access to it in order to generate the plain text R code somehow.
This is true for all programs or files that you run or view on your computer. Encrypted PDF files? No, they are just obfuscated, and once you find the decryption keys you can decrypt them. Even code written in C or C++ distributed as a binary can be reverse engineered with enough time, tools, and clever enough hackers.
If you want it protected, keep it on your servers and only allow access via a network API.
I recently had to do something similar and it wasn't easy, but I managed to get it done. Obfuscating and/or encrypting scripts is possible. The question is, do you have the time to devote to it? You'll need to make sure whichever obfuscation/encryption method you use is difficult and time-consuming to crack, and that it doesn't slow down the execution time of the script.
If you wish to encrypt an Rscript quickly, you can do so using this site.
I tested the following R code using the aforementioned site and it produced a very intimidating output, which somehow worked:
#!/usr/bin/env Rscript
for (i in 1:100) {
  if (i %% 3 == 0) {
    if (i %% 5 == 0) print("fizzbuzz") else print("fizz")
  } else if (i %% 5 == 0) {
    print("buzz")
  } else {
    print(i)
  }
}
If you do have some time on your hands and you wish to encrypt your script on your own, using your own improvised method, you'll want to use the openssl command. Why? Because it appears to be the one encryption tool that is available across most, if not all, Unix-like systems. I've verified it exists on Linux (Ubuntu, CentOS, Red Hat), macOS, and AIX.
The simplest way to use Openssl to encrypt a file or script is:
1. cat <your-script> | openssl aes-128-cbc -a -salt -k "specify-a-password" > yourscript.enc
OR
2. openssl aes-128-cbc -a -salt -in <path-to-your-script> -k "yourpassword"
To decrypt a script using Openssl (notice the '-d'):
1. cat yourscript.enc | openssl aes-128-cbc -a -d -salt -k "specify-a-password" > yourscript.dec
OR
2. openssl aes-128-cbc -a -d -salt -in <path-to-your-script> -k "yourpassword" > yourscript.dec
The trick here would be to automate the supply of the password so your users need not specify a password each time they want to run the script. Or maybe that's what you want?
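One way to automate it, as a sketch only: read the password from an environment variable and decrypt to a temporary file before running it (SCRIPT_KEY and the file names are made up for illustration):

# Decrypt the script with a password taken from the environment,
# run it with Rscript, then remove the temporary plain-text copy.
export SCRIPT_KEY="specify-a-password"
tmp=$(mktemp) && \
  openssl aes-128-cbc -a -d -salt -k "$SCRIPT_KEY" -in yourscript.enc -out "$tmp" && \
  Rscript "$tmp"
rm -f "$tmp"

Of course, anyone who can read the environment variable or the temporary file can recover the plain text, which is exactly the limitation described above.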

SSH-agent working over many servers without retyping? Some flag? [closed]

Suppose machines myLaptop, A, and B. The same ssh-agent should let me hop across A and B without re-adding keys to an ssh-agent on server A in order to reach B.
$ eval `ssh-agent`; ssh-add ~/.ssh/mePriv   # on myLaptop
$ ssh me@kosh.A.com                         # works without typing a password
$ ssh me@triton.A.com                       # won't work; ssh-agent not alive on A?!
$ eval `ssh-agent`; ssh-add ~/.ssh/mePriv; ssh me@triton.A.com   # works, but a duplicate...
where I now have an ssh-agent running both on myLaptop and on A. Is there some easy way to set up the ssh-agent only once, on myLaptop, without retyping everything again on A?
P.S. I am not sure about the technical terms, but what I am trying to achieve here, connecting to server B through server A, can be done with something like ssh forwarding/ssh tunneling; I am not sure about the correct terminology. For this question, focus on ssh-agent. The easiest solution is very much appreciated!
Please see the answer here.
In short:
run ssh-keygen in your server
move the private-key id_rsa to your laptop's $HOME/.ssh/id_rsa
remove the private key id_rsa from your server
create the following $HOME/.ssh/config in your laptop
run ssh-add $HOME/.ssh/id_rsa
copy the public key to the laptop's $HOME/.ssh/id_rsa.pub
add the public key to the server's $HOME/.ssh/authorized_keys
Have a .ssh/config like:
Host server.myhomepage.com
User masi
Port 22
Hostname server.myhomepage.com
IdentityFile ~/.ssh/id_rsa
TCPKeepAlive yes
IdentitiesOnly yes
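The answer above covers key setup; for the hop from A to B using the laptop's agent, the usual mechanism is agent forwarding. This is an addition, not part of the original answer; the host names are taken from the question:

Host kosh.A.com
    User me
    ForwardAgent yes

With ForwardAgent yes in the laptop's ~/.ssh/config (or by connecting with ssh -A me@kosh.A.com), the agent running on myLaptop answers key challenges when you then run ssh me@triton.A.com from A, so no second ssh-agent or ssh-add is needed there. Only forward the agent to hosts you trust, since root on that host can use the forwarded socket.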

Rsync not terminating when given a timeout [closed]

We are trying to set up a timeout limit for rsync data transfer; below is the command being executed:
rsync --bwlimit=10 --timeout=10 -e ssh -avzr --delete /u01/Oracle/SyncScriptFolder/source xxxxx@xxxxxx:/u01/Oracle/SyncScriptFolder/source --stats -i
Based on the above command, rsync should stop the execution/transfer if it does not complete in 10 seconds, but it continues to execute and does not terminate.
rsync terminates only when no data transfer happens between source and destination for that long. Check whether any files are still being updated on the source system.
Note the relevant text from the documentation:
--timeout=TIMEOUT
This option allows you to set a maximum I/O timeout in seconds. If no data is transferred for the specified time then rsync will exit. The default is 0, which means no timeout.
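If what is actually wanted is a hard wall-clock limit (stop after N seconds even while data is still flowing), a sketch of one way to get it is to wrap the command from the question in GNU coreutils' timeout (the 600-second limit is an arbitrary example):

# Kill rsync after 600 seconds of total runtime; --timeout=10 still
# makes rsync exit on its own if no data moves for 10 seconds.
timeout 600 rsync --bwlimit=10 --timeout=10 -e ssh -avzr --delete \
    /u01/Oracle/SyncScriptFolder/source \
    xxxxx@xxxxxx:/u01/Oracle/SyncScriptFolder/source --stats -i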

Squid client purge utility [closed]

I've been using the purge utility, i.e.
squidclient -m PURGE http://www.example.com/
The above command purges that exact link, but it leaves everything else under it in the cache (e.g. http://www.example.com/page1).
I was wondering, is there a way to purge every document under that URL?
I've had limited success messing with this line:
awk '{print $7}' /var/log/squid/access.log | grep www.example.com | sort | uniq | xargs -n 1 squidclient -m PURGE -s
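A slightly tighter variant of that pipeline, as a sketch only (same log path, domain, and squidclient invocation as above): anchor the match to the URL prefix instead of grepping anywhere in the line, and let sort deduplicate:

# Field 7 of Squid's default access.log is the request URL.
awk '$7 ~ "^http://www\\.example\\.com/" {print $7}' /var/log/squid/access.log \
    | sort -u \
    | xargs -n 1 squidclient -m PURGE -s

This still only purges URLs that appear in the access log, not every object stored in the cache.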
First of all, thank you KimVais for advising me to ask on Server Fault; I have found a solution there.
As answered on Server Fault:
The 3rd-party purge utility will do exactly what you seek:
The purge tool is a kind of magnifying glass into your squid-2 cache. You can use purge to have a look at what URLs are stored in which file within your cache. The purge tool can also be used to release objects which URLs match user specified regular expressions. A more troublesome feature is the ability to remove files squid does not seem to know about any longer.
For our accelerating (reverse) proxy, I use a config like this:
purge -c /etc/squid/squid.conf -p localhost:80 -P0 -se 'http://www.mysite.com/'
-P0 will show the list of URLs but not remove them; change it to -P1 to send PURGE to the cache, as you do in your example.
The net-purge gem adds Net::HTTP::Purge to ruby, so you can easily purge your cache.
require 'net-purge'
Net::HTTP.start('417east.com') {|http|
request = Net::HTTP::Purge.new('/')
response = http.request(request)
puts response.body # Guru Meditation
}
I'd like to add that there's no O(1) way to invalidate multiple objects in the Squid cache. See the Squid FAQ for details.
For comparison, Nginx and Apache Traffic Server seem to lack this feature, too. OTOH, Varnish implements banning, which in practice should do what you want.
There are many ways to purge. Here are two examples I always use.
From a client running macOS or Linux:
curl -X PURGE http://URL.of.Site/ABC.txt
Directly on the server that is running Squid:
squidclient -m PURGE http://URL.of.Site/ABC.txt
In either case, squid.conf must allow the PURGE method:
acl Purge method PURGE
http_access allow localhost Purge
http_access allow localnet Purge
http_access deny Purge
Apache Traffic Server v6.0.0 adds a "cache generation ID" which can be set per remap rule, so you can effectively purge an entire "site" at no cost at all; it really doesn't do anything other than make the old versions unavailable.
This works well with the ATS cache because it's a cyclical cache (we call it a cyclone cache): objects are never actively removed, just "lost".
Using this new option is fairly straightforward, e.g.
map http://example.com http://real.example.com \
@plugin=conf_remap.so \
@pparam=proxy.config.http.cache.generation=1
To instantly (zero cost) purge all cached entries for example.com, simply bump the generation ID to 2, and reload the configuration the normal way.
I should also say that writing a plugin that loads these generation IDs from some external source other than remap.config would be very easy.

In Ubuntu, how do I figure out which process is a network pig [closed]

Using top it's easy to identify processes that are hogging memory and CPU, but occasionally I see my computer's network activity spike, and I'm unable to determine which process is generating the activity. Where is the right place to look for this information?
You can also take a look at "NetHogs": http://nethogs.sourceforge.net/. A little yet very handy utility, especially if you want to find out which process is taking the bandwidth.
You can install several applications to monitor network traffic in real time: NTOP, tcpdump, trafshow, iptraf.
I would go with NTOP or IPTRAF, but that's just personal taste.
Also, with Linux's netstat you can use the -p flag to see which process owns each connection.
You can also use iftop. In Ubuntu you can install it by typing in terminal: sudo aptitude install iftop. To use type: sudo iftop -i eth0, where eth0 is your network interface.
The package 'nmon' provides a comparable tool to top. The design's a bit different since the kernel doesn't provide excellent statistics via /proc.
Description: performance monitoring tool for Linux
nmon is a systems administrator, tuner, benchmark tool.
It can display the CPU, memory, network, disks (mini graphs or numbers),
There's also iftop:
Description: displays bandwidth usage information on a network interface
iftop does for network usage what top(1) does for CPU usage. It listens to
network traffic on a named interface and displays a table of current bandwidth
lsof -i -n -P gives you for each connection the process and the endpoints...
A small correction to Pablo Santa Cruz:
On Linux, netstat -p gives the PID of the program owning each connection. On BSD, netstat -p is used to specify the protocol.
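Putting a couple of these suggestions together, a sketch of the quickest checks on Ubuntu (eth0 is a placeholder interface name):

# Live per-process bandwidth usage:
sudo nethogs eth0
# Which process owns each TCP/UDP socket (root needed to see other users' processes):
sudo netstat -tunp
# Or, with the newer iproute2 tool:
sudo ss -tunp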
