rsync in a FreeBSD jail: failed to set times: Operation not permitted

I have a single "partition" ZFS pool mounted at a directory inside /jails/www/usr/local/www/stuff (which is served by nginx), and from inside that jail I have chown'd that directory to a particular user. I have rsync periodically updating that directory from a remote server. Files are syncing fine; however, there is a persistent error:
rsync: failed to set times on "/usr/local/www/stuff/file": Operation not permitted
What am I missing here?

I wasn't aware that chown doesn't change symlinks themselves by default. Running chown -hR <user> /usr/local/www/stuff (with the same user as before) solved it.
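For reference, the difference on a symlink looks roughly like this (the user name and link path here are hypothetical):
chown www /usr/local/www/stuff/link     # follows the link and changes the target's owner
chown -h www /usr/local/www/stuff/link  # changes the owner of the symlink itself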

Related

Not able to mount directory after updating fstab

On a Windows file share I created a folder and shared it with a service account.
I have added a mount point entry to fstab like this:
//server/folder /mnt/folder cifs credentials=/root/creds/creds,noperm
Where the creds file contains credentials for above mentioned service account.
Then I ran mount -a to activate the mount point.
It gives an error like:
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Couldn't chdir to /mnt/folder: No such file or directory
I have tried mounting the directory manually and receive the same error.
What the heck am I missing?
Ah. I had not created the folder /mnt/folder on the UNIX side. I did mkdir /mnt/folder.
I was then able to see the folder in /mnt but it was empty.
I did a mount -a and can now see the contents of the Windows share in /mnt/folder.
It seems I had to run mount -a from /etc: when I issued the command from /mnt it just hung for a while until I killed it, and it then worked when I tried again from /etc.
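In short, the fix boils down to creating the mount point before mounting:
mkdir /mnt/folder
mount -a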

Error happens when trying to umount the Lustre file system

When I umount Lustre FS it displays:
[root@cn17663-ens4 mnt]# umount /mnt/lustre
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
and if I add the force option -f it gives the same result:
[root@cn17663-ens4 mnt]# umount /mnt/lustre -f
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
When I try to list the directory it gives me:
[root@cn17663-ens4 mnt]# ls
ls: cannot access lustre: Cannot send after transport endpoint shutdown
lustre
I cannot find the reason for this and cannot solve it.
Did you actually try running lsof /mnt/lustre (as the error message recommends) to see what is using the filesystem? This problem is not unique to Lustre, but true of any local filesystem as well - if there is a process using the filesystem (current working directory or open file) then it can't be unmounted until that process stops using it (cd out of /mnt/lustre or close the open file(s)).
I find I can use umount -l /mnt/xx to solve this problem!
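For reference, the rough sequence suggested above, first finding what holds the mount and then falling back to a lazy unmount:
lsof /mnt/lustre        # list processes with open files or a working directory under the mount
fuser -vm /mnt/lustre   # alternative view of the same information
umount -l /mnt/lustre   # lazy unmount: detach now, clean up once it is no longer busy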

Deleting temporary directories from weblogic server

Before starting the WebLogic server I want to delete the temporary directories that WebLogic creates. I see the directories below in my Admin Server:
path: user_projects/domains/my_domain/servers/AdminServer
cache
sysman
adr
data
logs
tmp
and below for my Managed Server:
path: user_projects/domains/my_domain/servers/MyManagedServer
cache
data
logs
stage
tmp
I have installed WLS 10.3.6 in production mode on my Linux box. Which of these directories can I delete? Also, I read that deleting some files here will make WLS re-deploy the applications again; is that true?
I am new to WebLogic, so I am unsure whether deleting any of these files will cause issues.
As G. Demecki mentions, do not delete the data directory on the admin server as you will lose the embedded LDAP that weblogic uses for the DefaultAuthenticator.
The tmp and cache directories are fine to remove; do it when the server is not running.
You can delete tmp and cache safely, but do not delete log. If you don't want the log folder, take a backup or rename it (mv log log.bkp.date) and only then delete the original log folder. If the server stops running you will have to check the log files, or provide them if anyone asks in your work environment.
Try deleting the .DAT and .lok files with these commands:
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/bi_server1/tmp/bi_server1.lok
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/bi_server1/data/ldap/ldapfiles/EmbeddedLDAP.lok
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/bi_server1/data/store/default/_WS_BI_SERVER1000000.DAT
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/bi_server1/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/AdminServer/data/store/default/_WLS_ADMINSERVER000000.DAT
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/AdminServer/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
rm -f /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/servers/AdminServer/tmp/AdminServer.lok
Only these two directories need to be deleted:
cache
tmp
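Putting that together, a minimal cleanup sketch using the paths from the question (run it only while the servers are stopped):
rm -rf user_projects/domains/my_domain/servers/AdminServer/tmp user_projects/domains/my_domain/servers/AdminServer/cache
rm -rf user_projects/domains/my_domain/servers/MyManagedServer/tmp user_projects/domains/my_domain/servers/MyManagedServer/cache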

Syncing local and remote directories using rsync+ssh+public key as a user different to the ssh key owner

The goal is to sync local and remote folders over ssh.
My current user is user1, and I have a password-less access setup over ssh to a server server1.
I want to sync local folder with a folder on server1 by means of rsync utility.
Normally I would run:
rsync -rtvz /path/to/local/folder server1:/path/to/remote/folder
ssh access works as expected, and rsync is able to connect over ssh, but it returns a "Permission denied" error because on server1 the folder /path/to/remote/folder is owned by user2:user2, and its permissions do not allow it to be altered by anyone else.
user1 is a sudoer on server1, so sudo su - user2 works during an ssh session.
How do I force rsync to switch the user once it has ssh'ed to the server?
Adding user1 to the group user2 is not an option because all user/group management on the server is done automatically and replicated from a central repo every X minutes, which I have no access to.
Same for changing permissions/ownership of the destination folder: it is updated automatically on a regular basis with a reset of all permissions.
A possible solution that comes to mind is a script that syncs the local folder to a temporary intermediate remote folder owned by user1 on the server, and then syncs the two remote folders as user2 (see the sketch after this question).
Googling for a shorter and prettier solution did not yield any success.
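For reference, that two-stage workaround could look roughly like this (the staging path is hypothetical, and user2 must be able to read it):
rsync -rtvz /path/to/local/folder/ server1:/home/user1/staging/
ssh server1 'sudo -u user2 rsync -rtv /home/user1/staging/ /path/to/remote/folder/'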
I have not tried it myself, but how about using rsync's --rsync-path option?
rsync -rtvz --rsync-path='sudo -u user2 rsync' /path/to/local/folder server1:/path/to/remote/folder
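Note that for this to work non-interactively, user1 usually needs to be allowed to run rsync as user2 via sudo without a password (and without a TTY) on server1. A minimal sudoers sketch, assuming rsync lives at /usr/bin/rsync:
user1 ALL=(user2) NOPASSWD: /usr/bin/rsync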
To fix the permissions problem you need to run rsync over an SSH session that logs in remotely as user2:
rsync -avz -e 'ssh -i privatekeyfile' /path/to/local/folder/ user2@server1:/path/to/remote/folder
The following answer explains how to set up the SSH keys: Ant, download fileset from remote machine
Set up password-less access for user1 to access user2@server1, then do:
rsync -rtvz /path/to/local/folder user2@server1:/path/to/remote/folder

How to change the owner for a rsync

I understand how rsync preserves permissions. However, in my case my local computer does not have the user that the files need to be under for the webserver. So when I rsync, I need the owner and group to be apache on the webserver, but my own username on my local computer. Any suggestions?
To clarify, here is exactly what I need done.
My personal computer: named 'home' with the user account 'michael'
My web server: named 'server' with the user account 'remote' and user account 'apache'
Current situation: My website is on 'home' with the owner 'michael' and on 'server' with the owner 'apache'. 'home' needs to be using the user 'michael' and 'server' needs to be using the user 'apache'
Task: rsync my website on 'home' to 'server' but have all the files owned by 'apache' and in the group 'apache'.
Problem: rsync will preserve the permissions, owner, and group; however, I need all the files to be owned by apache. I know that not preserving the owner will assign the files to the user on 'server', but since that user is 'remote', the files end up owned by 'remote' instead of 'apache'. I cannot rsync as the user 'apache' (which would be nice), because that is a security risk I'm not willing to open up.
My only idea so far: after each rsync, manually run chown -R and chgrp -R, but it's a huge system and this takes a long time, especially since this is going to production.
Does anyone know how to do this?
Current command I use to rsync:
rsync --progress -rltpDzC --force --delete -e "ssh -p22" ./ remote@server.com:/website
If you have access to rsync v.3.1.0 or later, use the --chown option:
rsync -og --chown=apache:apache [src] [dst]
More info in an answer from a similar question here: ServerFault: Rsync command issues, owner and group permissions doesn't change
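Applied to the command from the question, that would look roughly like this (-o and -g are added so ownership is sent at all, and the receiving side must still be allowed to chown, e.g. by running as root; see the caveat further down):
rsync --progress -rltpDzogC --force --delete --chown=apache:apache -e "ssh -p22" ./ remote@server.com:/website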
There are hacks you could put together on the receiving machine to get the ownership right -- running 'chown -R apache /website' out of cron would be an effective but pretty kludgey option -- but instead, I'd recommend securely allowing rsync-over-ssh-as-apache.
You'd create a dedicated ssh keypair for this:
ssh-keygen -f ~/.ssh/apache-rsync
and then take ~/.ssh/apache-rsync.pub over to the webserver, where you'd put it into ~apache/.ssh/authorized_keys and carefully specify the allowed command, something like so, all on one line:
command="rsync --server -vlogDtprCz --delete . /website",from="IP.ADDR.OF.SENDER",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAABKEYPUBTEXTsVX9NjIK59wJ+fjDgTQtGwhATsfidQbO6u77dbAjTUmWCZjKAQ/fEFWZGSlqcO2yXXXXXXXXXXVd9DSS1tjE6vAQaRdnMXBggtn4M9rnePD2qlR5QOAUUwhyFPhm6U4VFhRoa3wLvoqCVtCV0cuirB6I45On96OPijOwvAuz3KIE3+W9offomzHsljUMXXXXXXXXXXMoYLywMG/GPrZ8supIDYk57waTQWymUyRohoQqFGMzuDNbq+U0JSRlvLFoVUZ5Piz+gKJwwiFwwAW2iNag/c4Mrb/BVDQAyEQ== comment#email.address
and then your rsync command on your "home" machine would be something like
rsync -av --delete -e 'ssh -i ~/.ssh/apache-rsync' ./ apache@server:/website
There are other ways to skin this cat, but this is the clearest and involves the fewest workarounds, to my mind. It prevents getting a shell as apache, which is the biggest security concern, natch. If you're really deadset against allowing ssh as apache, there are other ways ... but this is how I've done it.
References here: http://ramblings.narrabilis.com/using-rsync-with-ssh, http://www.sakana.fr/blog/2008/05/07/securing-automated-rsync-over-ssh/
Recent versions of rsync (at least 3.1.1) allow you to specify the "remote ownership":
--usermap=tom:www-data
This maps the owner tom to www-data (i.e. the PHP/Nginx user). If you are using a Mac as the client, use brew to upgrade to the latest version. On your server, download the source archive, then build it with make.
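A rough full command using that option (the host and paths here are made up, and the same receiving-side caveat mentioned below applies; -a already includes --owner, which --usermap needs):
rsync -avz --usermap=tom:www-data /local/dir/ user@server:/remote/dir/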
The solution using rsync --chown USER:GROUP [src] [dst] only works if the remote user has write access to the destination directory, which in most cases is not the case.
Here's another solution:
Overview
(srcmachine)   (rsync)    (destmachine)
  srcuser    -- SSH -->     destuser
                               |
                               | sudo su jenkins
                               |
                               v
                            jenkins
Let's say that you want to rsync:
From:
Machine: srcmachine
User: srcuser
Directory: /var/lib/jenkins
To:
Machine: destmachine
User: destuser (to establish the SSH connection)
Directory: /tmp
Final files owner: jenkins.
Solution
rsync --rsync-path 'sudo -u jenkins rsync' -avP --delete /var/lib/jenkins destuser#destmachine:/tmp
Read more here:
https://unix.stackexchange.com/a/546296/116861
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhnP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
-n : perform a trial run with no changes made; to actually execute the command, remove the -n option

Resources