Merging two dovecot Maildirs - centos6

I recently set up a mail server using ZPanel (with Dovecot and the Maildir format) for my domain, and created a user email account user@my-domain.com.
All the emails are stored on the server under /var/zpanel/vmail/ in the following tree:
my-domain.com
|--> cur/
|      ------
|--> new/
|      ------
|--> tmp/
       ------
I have all the user's email from my old server (in the same format as above), where the mail server for my-domain.com was hosted before.
The problem is that I already have a few emails for this user on the new server, and I want to merge the two so that all email shows up, whether it was delivered by the new mail server or the old one.
Is there any way I can merge these two Maildirs?

Sorry, but I figured out how to do it. I am answering my own question here so that it will be useful for others as well.
We can merge the directories with a plain filesystem copy, but afterwards we need to make sure all files get the appropriate ownership for the vmail user (the user might be specific to ZPanel).
This is what worked for me, under /var/zpanel/vmail/my-domain.com/cur/:
$ sudo chown vmail:mail *
The same needs to be done for the /new and /tmp directories.
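Putting the whole merge together, a minimal sketch, assuming the old Maildir has first been copied to /root/old-maildir on the new server (that path is an example, not part of the original setup):
# copy old messages into the live Maildir, preserving timestamps
sudo cp -p /root/old-maildir/cur/* /var/zpanel/vmail/my-domain.com/cur/
sudo cp -p /root/old-maildir/new/* /var/zpanel/vmail/my-domain.com/new/
# tmp/ only holds in-flight deliveries, so it does not need to be merged
sudo chown vmail:mail /var/zpanel/vmail/my-domain.com/cur/* /var/zpanel/vmail/my-domain.com/new/*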

You can use cp -Rp to recursively copy folder contents while preserving ownership and permissions, and then delete the old folder.
For example, to merge "INBOX/something" into "INBOX" with Dovecot:
# from the dovecot mail folder
cp -Rp .INBOX.something/* ./
rm -rf .INBOX.something

Related

Mapping between HDFS Daemon and Kerberos Principal and Unix Account

In my organization, to access the hadoop cluster we do the following on the Gateway:
sudo su -
cd /etc/username/
kinit some_string/instance -k -t some_string.keytab
hadoop fs -ls
This works perfectly fine, but I am trying to understand what exactly is going on.
When I do a 'whoami' it obviously shows 'root'. But any files created the above way on HDFS have the owner 'some_string' and the group 'hdfs'. And I can neither kinit nor access HDFS as any other user. Why is this so?
Is this because Hadoop's HDFS daemon is mapped to the Kerberos principal (and that principal's ticket is only accessible to me as the root user), and that principal is also mapped to the OS account some_string, which is what I see as the owner of the files on HDFS? If so, where is the link defined (Hadoop daemon to principal to OS account)?
I tried googling around a lot but could not find a definitive answer to my confusion. Even when I log in to HUE with my own user, I do not have write access to these files, which is also something I want to understand how to resolve.
Thanks.
Edit:
$ klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: some_string/instance@CLOUDERA.xxxx.CORP

Valid starting       Expires              Service principal
03/02/16 21:06:19    03/03/16 21:06:19    krbtgt/CLOUDERA.xxxx.CORP@CLOUDERA.xxxx.CORP
    renew until 03/02/16 21:06:19
So when you execute the command below:
kinit some_string/instance -k -t some_string.keytab
you are requesting a ticket for the principal stored in your some_string.keytab file, which you can inspect using:
klist -k some_string.keytab
It will show you the principal name and key version. Keytab files contain the key material as well, so kinit does not ask for a password.
The second question is also answered by the klist output: it shows the principal, which has the form user/host@REALM. In your case the user is some_string, and once you hold a ticket for some_string you are some_string as far as Kerberos is concerned, so your commands are executed as the some_string user and the owner of any files created will be some_string.
You can also list the entries in a keytab using the klist command; see the output below:
[root@myhostname ~]# klist -k some_Name.keytab
Keytab name: FILE:some_Name.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   1 myuser/myhostname@MYREAL.COM
Here my keytab is for the user myuser and the host myhostname.

LEMP + wordpress file permissions to be able to edit, upgrade and use sftp client

I am trying to manage file permissions on a Debian webserver that runs nginx, so that WordPress can edit, upload and upgrade without having to use FTP. I also want to be able to log in using SFTP with my user account.
I am aware that this question has been asked before (see here or here), but following the steps in those answers hasn't been satisfying. The setup currently looks as follows:
The wordpress folder is in /var/www/html/
I made a new user ("user") and group ("group"). The server user is
"www-data".
All files in the wordpress folder are owned by user:group.
Both "user" and "www-data" are set to belong to "group".
I changed file and folder permissions as follows:
find /var/www/html/ -type d -exec chmod 2775 {} +
find /var/www/html/ -type f -exec chmod 664 {} +
I set the default umask to 0002.
I would have thought this should work, but currently I can edit and upload files from within WordPress, yet not update WordPress, functions or themes.
It also does not work with "group" set as the default group for "user" and/or "www-data" (by editing /etc/passwd).
Alternatively, I made all files in /var/www/html/ owned by user:www-data, but also without success.
The only way I seem to get WordPress to update without FTP is by making the WordPress folder and all its files owned by "www-data". Unfortunately, the result is that I cannot upload files using an SFTP client (because the target is now a folder that is not owned by "user").
How can this be? As far as I understand, these steps should give WordPress the proper permissions, but something is still wrong.
Your help would be greatly appreciated.
On a Debian server I followed these steps. It might not be the most secure solution, as I read here, but it works (WordPress can edit, upload and upgrade, and I can upload using SFTP).
Create a new user "user"
Create a new group "group" (you can choose to use www-data as group as well)
Add user and www-data to group (-a appends, so their existing supplementary groups are kept)
usermod -aG group user
usermod -aG group www-data
Check group numerical id in /etc/group
e.g. group:x:1002
Change default group of www-data and user in /etc/passwd
e.g. user:x:1001:1002:...
In /etc/php5/fpm/pool.d/www.conf (in my case) change group=www-data to ;group=www-data. PHP-FPM will now run its workers with the default group of www-data, which we just set to "group". Reload the service (php5-fpm), as shown below.
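For example, one way to make that edit and reload (the file path is the one from my setup; adjust it to yours):
sed -i 's/^group = www-data/;group = www-data/' /etc/php5/fpm/pool.d/www.conf
service php5-fpm reload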
Recursively change owner of your wordpress folder to user:group
chown -R user:group /var/www/html
Change permissions in your wordpress folder (the leading 2 is the setgid bit, which makes new files inherit the parent folder's group)
find /var/www/html/ -type d -exec chmod 2775 {} +
find /var/www/html/ -type f -exec chmod 664 {} +
Change umask to UMASK 0002 in /etc/login.defs
In WordPress, enforce direct filesystem access (so no FTP) by adding define('FS_METHOD','direct'); to wp-config.php. In my case, this was an essential step.
To get things working, I needed to reboot.
I ran into this issue and figured I would share how I fixed it on Ubuntu running PHP 7, in case it can help someone. I adapted the following after reading this article, which outlines how it is done with PHP 5.
Nginx needs to be set up with per-user PHP-FPM pools in order to give ownership of files and folders to users.
First, you need to create a new PHP-FPM pool. Do this by copying the default pool configuration and renaming it for the user you want to associate it with:
sudo cp /etc/php/7.0/fpm/pool.d/www.conf /etc/php/7.0/fpm/pool.d/username.conf
Edit the file:
sudo nano /etc/php/7.0/fpm/pool.d/username.conf
Go through the file and change username in the following locations:
; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[username]
; Note: The user is mandatory. If the group is not set, the default user's group
; will be used.
user = username
listen = /run/php/php7.0-fpm.username.sock
Now you need to update your server block(s) to point at the socket of the newly created pool.
Open your server configuration file:
sudo nano /etc/nginx/sites-available/default
Or, if you set up server blocks (virtual hosts), then:
sudo nano /etc/nginx/sites-available/example.com
Edit the following line and replace username:
fastcgi_pass unix:/run/php/php7.0-fpm.username.sock;
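For context, that directive lives inside the PHP location block of your server block; a minimal sketch (the include path is Ubuntu's default and is an assumption here):
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.0-fpm.username.sock;
}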
Finally, restart Nginx:
sudo service nginx restart

Syncing local and remote directories using rsync+ssh+public key as a user different to the ssh key owner

The goal is to sync local and remote folders over ssh.
My current user is user1, and I have password-less SSH access set up to a server, server1.
I want to sync a local folder with a folder on server1 using the rsync utility.
Normally I would run:
rsync -rtvz /path/to/local/folder server1:/path/to/remote/folder
SSH access works as expected, and rsync is able to connect over SSH, but it returns a "Permission denied" error because on server1 the folder /path/to/remote/folder is owned by user2:user2, and its permissions do not allow it to be altered by anyone else.
user1 is a sudoer on server1, so sudo su - user2 works during an SSH session.
How do I force rsync to switch users once it has SSH'ed into the server?
Adding user1 to the group user2 is not an option, because all user/group management on the server is done automatically and replicated from a central repo every X minutes, which I have no access to.
The same goes for changing permissions/ownership of the destination folder: it is updated automatically on a regular basis, with a reset of all permissions.
A possible solution that comes to mind is a script that syncs the local folder to a temporary intermediate remote folder owned by user1 on the server, and then syncs the two remote folders as user2 (sketched below).
Googling for a shorter and prettier solution did not yield any success.
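That two-stage approach would look roughly like this (the staging path is an assumption):
# stage 1: push into a staging folder owned by user1
rsync -rtvz /path/to/local/folder/ server1:/home/user1/staging/
# stage 2: copy into place as user2 (-t gives sudo a tty in case it prompts for a password)
ssh -t server1 "sudo -u user2 rsync -rt /home/user1/staging/ /path/to/remote/folder/"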
I have not tried it myself, but how about using rsync's --rsync-path option?
rsync -rtvz --rsync-path='sudo -u user2 rsync' /path/to/local/folder server1:/path/to/remote/folder
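Note that for this to work non-interactively, user1 has to be allowed to run rsync as user2 via sudo without a password prompt on server1; a sudoers sketch (edit with visudo; the binary path /usr/bin/rsync is an assumption):
user1 ALL=(user2) NOPASSWD: /usr/bin/rsync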
To fix the permissions problem you need to run rsync over an SSH session that logs in remotely as user2:
rsync -avz -e 'ssh -i privatekeyfile' /path/to/local/folder/ user2@server1:/path/to/remote/folder
The following answer explains how to set up the SSH keys: "Ant, download fileset from remote machine"
Set up password-less access for user1 to user2@server1 (see the sketch below), then run:
rsync -rtvz /path/to/local/folder user2@server1:/path/to/remote/folder
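Setting up that password-less access is the usual key exchange, for example:
ssh-keygen                  # generate a key pair for user1, if none exists yet
ssh-copy-id user2@server1   # install user1's public key for user2 on server1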

What's the syntax and prerequisite for --password-file option in rsync?

I want to use the --password-file option that comes with rsync; I don't want to use SSH public/private key authentication. I have tried this command:
rsync -avz --progress --password-file=pass.txt source destination
This says:
The --password-file option may only be used when accessing an rsync daemon.
So, I tried using:
rsync -avz --progress --password-file=pass.txt source destination rsyncd --daemon
But this returns various errors like "unknown option". Is my syntax correct? How do I set up an rsync daemon on my Debian machine?
That is correct:
--password-file is only applicable when connecting to an rsync daemon.
You probably haven't set the password in the daemon itself, though; the password you set there and the one you use during that call must match.
Edit /etc/rsyncd.secrets, and set the owner/group of that file to root:root with no read permissions for others (rsync refuses a world-readable secrets file unless strict modes is disabled).
#/etc/rsyncd.secrets
root:YourSecretestPassword
To connect to an rsync daemon, use a double colon followed by the module name and the file or folder to synchronize (instead of the single colon used with SSH):
RSYNC_PASSWORD="YourSecretestPassword" rsync -rtv user@remotehost::module/source/ destination/
NOTE:
this implies forgoing SSH encryption; though the password itself is not sent across the network in plain text, your data is ...
this is already insecure as is; never use the same password as any of your users' accounts.
For a better understanding of its inner workings (how to give specific IPs/processes the ability to upload to specified areas of the filesystem without the need for a user account): http://transamrit.net/docs/rsync/
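For completeness, a minimal module definition in /etc/rsyncd.conf to pair with that secrets file (the module name and path here are assumptions):
# /etc/rsyncd.conf
[module]
    path = /srv/rsync/module
    auth users = root
    secrets file = /etc/rsyncd.secrets
    read only = false
Then start the daemon with rsync --daemon.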
After trying for a while, I got this to work. Since I'm copying from my live server (and router data) to the local server on my laptop as the backup user, there is no problem with the password being unencrypted; it stays secured on my wired laptop at home. First you need to install sshpass (on CentOS: yum install sshpass), then create a user backup and assign it a temporary password. I listed the -p option for ssh in case your SSH port is different from the default.
sshpass -p 'password' rsync -vaurP -e 'ssh -p 2222' backup@???.your.ip.???:/somedir/public_data/temp/ /your/localdata/temp
Understand that an SSH RSA key is the better permanent alternative and all that, but this is a quick way to back up and restore on the go. It works if you are not too concerned about security but more concerned about having your data backed up locally, as in an emergency or data recovery. You can change the backup user's password once the backup is completed.
It is a lot faster to set up when your servers change IPs and users and are in constant modification (routers changing configs and non-static IPs, or routers that are not local while you back up clients' servers locally, where you don't always have SSH access). Some of my clients don't even have SSH installed and don't want the hassle of creating public keys; to some servers you only have access on a temporary basis.
By the way, if you want to do the restore, just reverse the case: from the same command shell you can do it by reversing the order of the target and source directories and creating another backup user with the same temporary password on the target (see the sketch below). After finishing, delete the backup user or change its password on the target and/or source servers. You can protect things even further, as I have done, by replacing the password with a one-line file, using a bash script for multi-server environments; the alternative is the -f option so the password does not show up in the bash history: -f "/path/to/passwordfile". Regards
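For example, the restore direction with the password read from a file instead of the command line (same placeholder host and port as above):
sshpass -f '/path/to/passwordfile' rsync -vaurP -e 'ssh -p 2222' /your/localdata/temp/ backup@???.your.ip.???:/somedir/public_data/temp/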
NOTE: If you want to transfer only modified files, you should use the parameters -h -v -r -P -t, as described here: https://unix.stackexchange.com/questions/67539/how-to-rsync-only-new-files
rsync -arv -e \
"sshpass -f '/your/pass.txt' ssh -o StrictHostKeyChecking=no" \
--progress /your/source id@IP:/your/destination
You may have to install sshpass if you don't have it already.

How to change the owner for a rsync

I understand how rsync preserves permissions.
However, in my case my local computer does not have the user that the files need to be under for the webserver. So when I rsync, I need the owner and group to be apache on the webserver, but my username on my local computer. Any suggestions?
I wanted to clarify to explain exactly what I need done.
My personal computer: named 'home' with the user account 'michael'
My web server: named 'server' with the user account 'remote' and user account 'apache'
Current situation: my website is on 'home' with the owner 'michael' and on 'server' with the owner 'apache'. 'home' needs to keep using the user 'michael' and 'server' needs to keep using the user 'apache'.
Task: rsync my website from 'home' to 'server', but have all the files owned by 'apache' with the group 'apache'.
Problem: rsync preserves the permissions, owner, and group; however, I need all the files to be owned by apache. I know that not preserving the owner will set the owner to the connecting user on 'server', but since that user is 'remote' it uses that instead of 'apache'. I cannot rsync as the user 'apache' (which would be nice); that is a security risk I'm not willing to open up.
My only idea on how to solve this: after each rsync, manually chown -R and chgrp -R, but it's a huge system and this takes a long time, especially since it is going to production.
Does anyone know how to do this?
Current command I use to rsync:
rsync --progress -rltpDzC --force --delete -e "ssh -p22" ./ remote@server.com:/website
If you have access to rsync v3.1.0 or later, use the --chown option:
rsync -og --chown=apache:apache [src] [dst]
More info in an answer to a similar question here: ServerFault: Rsync command issues, owner and group permissions doesn't change
There are hacks you could put together on the receiving machine to get the ownership right -- running 'chown -R apache /website' out of cron would be an effective but pretty kludgey option -- but instead, I'd recommend securely allowing rsync-over-ssh-as-apache.
You'd create a dedicated ssh keypair for this:
ssh-keygen -f ~/.ssh/apache-rsync
and then take ~/.ssh/apache-rsync.pub over to the webserver, where you'd put it into ~apache/.ssh/authorized_keys and carefully specify the allowed command, something like so, all on one line:
command="rsync --server -vlogDtprCz --delete . /website",from="IP.ADDR.OF.SENDER",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAABKEYPUBTEXTsVX9NjIK59wJ+fjDgTQtGwhATsfidQbO6u77dbAjTUmWCZjKAQ/fEFWZGSlqcO2yXXXXXXXXXXVd9DSS1tjE6vAQaRdnMXBggtn4M9rnePD2qlR5QOAUUwhyFPhm6U4VFhRoa3wLvoqCVtCV0cuirB6I45On96OPijOwvAuz3KIE3+W9offomzHsljUMXXXXXXXXXXMoYLywMG/GPrZ8supIDYk57waTQWymUyRohoQqFGMzuDNbq+U0JSRlvLFoVUZ5Piz+gKJwwiFwwAW2iNag/c4Mrb/BVDQAyEQ== comment#email.address
and then your rsync command on your "home" machine would be something like
rsync -av --delete -e 'ssh -i ~/.ssh/apache-rsync' ./ apache@server:/website
There are other ways to skin this cat, but this is the clearest and involves the fewest workarounds, to my mind. It prevents getting a shell as apache, which is the biggest security concern, natch. If you're really deadset against allowing ssh as apache, there are other ways ... but this is how I've done it.
References here: http://ramblings.narrabilis.com/using-rsync-with-ssh, http://www.sakana.fr/blog/2008/05/07/securing-automated-rsync-over-ssh/
Recent versions of rsync (at least 3.1.1) allow you to specify the "remote ownership":
--usermap=tom:www-data
This changes tom's ownership to www-data (aka PHP/Nginx). If you are using a Mac as the client, use brew to upgrade to the latest version. On your server, download the source archive, then "make" it! See the sketch below.
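A full invocation might look like this (paths and names are examples; note that -a already includes the -o/-g options that the mapping needs, and the receiving rsync generally has to run as super-user for ownership changes to apply):
rsync -av --usermap=tom:www-data --groupmap=tom:www-data ./site/ server:/var/www/site/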
The solution using rsync --chown USER:GROUP [src] [dst] only works if the remote user has write access to the destination directory, which in most cases is not the case.
Here's another solution:
Overview
(srcmachine)   (rsync)   (destmachine)
  srcuser ----- SSH -----> destuser
                              |
                              | sudo su jenkins
                              |
                              v
                           jenkins
Let's say that you want to rsync:
From:
Machine: srcmachine
User: srcuser
Directory: /var/lib/jenkins
To:
Machine: destmachine
User: destuser (used to establish the SSH connection)
Directory: /tmp
Final owner of the files: jenkins
Solution
rsync --rsync-path 'sudo -u jenkins rsync' -avP --delete /var/lib/jenkins destuser#destmachine:/tmp
Read more here:
https://unix.stackexchange.com/a/546296/116861
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the (Debian) server:
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhnP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
-n: perform a trial run with no changes made; to really execute the command, remove the -n option.
