Like many others, I am trying to mount an NFS folder on a client while keeping UIDs consistent, on CentOS 6.5.
On the server I have a user test with UID 10000 (useradd -u 10000 -g 9999 test) who owns some files, and I export the folder with the no_all_squash option.
After that I create a user test with UID 10000 on the client and mount the NFS folder, but ls -ln shows the files owned by 99 (nobody) until the client reboots.
After a reboot everything works fine: the client sees the files with UID 10000. It seems the client-side kernel somehow doesn't update its user list/cache.
The same thing happens on user deletion: until a reboot the client still shows the right UIDs (even though the user is already deleted); after a reboot it shows 99.
Because the accounts in question are not regular users but system accounts that are created and deleted dynamically, rebooting is not an option. Any ideas: some config reload, etc.?
Ideally, I would like the client to show the real UIDs from the server regardless of whether the user exists on the client.
Thanks.
This can be solved by clearing the UID mapping cache on the client machines:
/usr/sbin/nfsidmap -c
You can see the stale entries in /proc:
cat /proc/keys | grep 3$
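If stale entries keep reappearing before you can flush them, the cache lifetime can be shortened where the id_resolver key type is wired up. On Red Hat-style systems that is a request-key configuration file; the exact path and helper name vary by distribution, so treat this as a sketch:
# /etc/request-key.d/id_resolver.conf (path and helper vary by distro)
# -t sets the key expiration in seconds (default 600); a lower value
# means newly created/deleted users are noticed sooner
create  id_resolver  *  *  /usr/sbin/nfsidmap -t 60 %k %d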
More info about the underlying technology:
https://www.kernel.org/doc/Documentation/security/keys.txt
https://www.kernel.org/doc/Documentation/filesystems/nfs/idmapper.txt
This is also mentioned on Server Fault.
There are two NFS shares on our Linux Red Hat servers hosting the live and landing data. A keytab is refreshed (a ticket produced) every 30 minutes; apparently it gives a system account access to those two shared drives. If the keytab ticket is not valid, I guess we would get a "key is expired" error when browsing those two NFS locations.
This is what was documented as part of a handover from another team. I don't have a test environment, and I have to replace the keytab's rc4 keys with aes, but my problem is that I don't know how the keytab is associated with those two locations. It seems the locations are only accessible with the keytab.
Do I need to change any config file to replace the keytab's rc4 keys with aes? krb5.conf already has an entry allowing the new encryption type aes128.
The Unix, NFS storage, and AD teams are not giving me an answer, and I am new to all of this. I have read online that there is an sssd.conf file that can be used in conjunction with the Kerberos config. Can you give me direction from your experience?
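In case it helps anyone answer: as far as I understand, the encryption types actually present in the keytab can be listed with klist (I'm assuming the default keytab path here):
# -k: read a keytab, -t: show timestamps, -e: show encryption types
klist -kte /etc/krb5.keytab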
I'm trying to set up cloud hosting with Digital Ocean.
Please skip ahead to the part marked with asterisks (***) for the actual problem; everything between here and there is background info.
I need to generate an RSA key pair, so I cd into my ~/.ssh/ directory, then run:
ssh-keygen -t rsa
I already have existing id_rsa and id_rsa.pub files, so when prompted:
Enter file in which to save the key (/demo/.ssh/id_rsa):
I enter the following to create a new pair:
~/.ssh/id_cloudhosting
I'm then asked for a passphrase, which I simply press return for "no password":
Enter passphrase (empty for no passphrase):
I repeat the above for confirmation, and ssh-keygen finishes, printing the key's fingerprint and randomart image.
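(Aside: if I'm reading the man page right, the same can be done non-interactively by giving the file name and an empty passphrase up front:)
# -f names the output file, -N "" sets an empty passphrase
ssh-keygen -t rsa -f ~/.ssh/id_cloudhosting -N ""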
Now that I have two new files, id_cloudhosting and id_cloudhosting.pub, I need to copy the contents of the public file into my Digital Ocean hosting's 'Add SSH Key' console. I do that like so:
cat ~/.ssh/id_cloudhosting.pub
Which returns the contents of the file:
ssh-rsa
bUnChOFcOd3scrambledABCDEFGHIJKLMNOPQRSTUVWXYZnowIknowmy
ABCnextTIMEwontyouSINGwithmeHODOR demo@a
I paste the key into my hosting console and it saves successfully.
The next step is where the permission issues start: ****************
I need to "spin up a new server" - step four from their docs. So I enter the following:
cat ~/.ssh/id_worker.pub | ssh root@[my.hosting.ip.address] "cat >> ~/.ssh/authorized_keys"
Which should copy the public key as root to a newly created file called authorized_keys
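(For what it's worth, ssh-copy-id should do the same append, and also create ~/.ssh with the right permissions on the remote side; assuming the same key file:)
ssh-copy-id -i ~/.ssh/id_worker.pub root@[my.hosting.ip.address]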
This step never completes, because I'm immediately asked for a password to my host. I never created one! I pressed return (or enter) when creating the key, so I do the same when prompted, and get permission denied:
root@[host.ip.address]'s password:
Permission denied, please try again.
root@[host.ip.address]'s password:
Permission denied, please try again.
root@[host.ip.address]'s password:
Permission denied (publickey,password).
How can I rectify these permission denied issues?
EDIT: FIX BELOW
It seems that, because I used an unconventional file name (anything other than id_rsa), I needed to explicitly identify the key file when connecting:
ssh root@droplet.ip.address -i /path/to/private_key_file
...be sure not to use the public_key_file there. I am now connected to the server from my terminal. This is after destroying my previous droplet and creating a fresh one with fresh key files, as @will-barnwell suggested.
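To avoid typing the -i flag every time, a ~/.ssh/config entry can point at the right key; a sketch (the host alias is made up, adjust the address and path):
Host droplet
    HostName droplet.ip.address
    User root
    IdentityFile ~/.ssh/id_cloudhosting
After that, a plain ssh droplet picks up the key automatically.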
Assuming you have followed the linked guide up to and through Step Three, when you create a new server from their Web UI use the "Add SSH Keys" option and select the key you added to your account previously.
When actually spinning up a new server, select the keys that you would
like installed on your server from the "Create a Droplet" screen. You
can select as many keys as you like:
Once you click on the SSH key, the text saying, "Your
root password will be emailed to you" will disappear, and you will not
receive an email confirmation that your server has been created.
The command you were using was for adding an SSH key to a pre-existing server. Judging from the quote above, I bet the password you are being prompted for is in your email.
Why?
When you create a server on Digital Ocean (or really most cloud hosting services) a root password is automatically generated for you, unless you set the server up with an authorization key.
Using key authentication is definitely a good security choice, but make sure to read the instructions carefully, don't just copy/paste commands and expect it all to work out.
EDIT: OP's comments on the question have shed additional light on the matter.
New Advice: Blow your server away and set up the SSH keys as suggested; your server is probably unusable if it is not accepting your old SSH key and is prompting you for a password you don't have.
Be careful messing around with your last auth key, add a new one before removing an old one.
I'm hoping that someone can help me here. With CentOS 7, all the install docs I have found say to use MariaDB instead of MySQL, which is fine, but I can't seem to enable remote access. I have used "GRANT ALL ON *.* TO user@'address' IDENTIFIED BY 'your-root-password';", flushed privileges, and restarted the service, but I still am not able to connect via a remote terminal; I get ERROR 1045 (28000): Access denied for user 'username'.
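Spelled out on its own lines, what I ran was roughly:
GRANT ALL ON *.* TO 'user'@'address' IDENTIFIED BY 'your-root-password';
FLUSH PRIVILEGES;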
So I found another article that said I should go to my my.cnf file and make sure my bind settings are set correctly and such.
https://mariadb.com/kb/en/mariadb/documentation/getting-started/configuring-mariadb-for-remote-client-access/
Based on what this article shows, my my.cnf file is completely different from what it should be: it doesn't contain bind-address, skip-networking, port, or anything like that (see the sketch after the listing for comparison). It looks like the below.
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
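For comparison, the networking-related settings the article talks about would sit in the [mysqld] section, something like this (values are illustrative, not from my file):
[mysqld]
# listen on all interfaces rather than only localhost
bind-address=0.0.0.0
port=3306
# skip-networking must NOT be present if remote clients should connect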
I was wondering if anyone else might know what's going on with this. Thanks.
You may want to investigate this link:
http://www.versatilewebsolutions.com/blog/2015/05/dealing-with-mariadb-plugin-unixsocket-is-not-loaded.html
Essentially, MariaDB switched to unix_socket authentication as the default authentication plugin. To get around it, connect to your MariaDB server and execute these queries:
USE mysql;
UPDATE user SET plugin='' WHERE User = 'root';
FLUSH PRIVILEGES;
exit
This disables socket authentication for root and switches back to standard password authentication. I do not think this is recommended for production, though.
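If you'd rather not blank the plugin column entirely, a variant I would suggest (not from the linked post) is to switch root to the normal password plugin and set a real password at the same time:
USE mysql;
UPDATE user SET plugin='mysql_native_password' WHERE User = 'root';
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('a-strong-password');
FLUSH PRIVILEGES;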
I installed the cloud-in-a-box/fastrack version of Eucalyptus and am able to create an instance and log into it. But when trying sudo, sudo su -, or logging in as root, I'm asked for a password, and I'm not sure what the password might be. Does anyone know the default password for the image?
I think this is how the image is designed. It uses the cloud-user account only and has no root access, nor does it allow sudo.
There are other starter images available that can be "installed" and that have sudo to root enabled. In those cases you simply issue
sudo su -
and you become root.
To see what is easily available use:
eustore-describe-images
As a note, some of the other starter images use different accounts (not cloud-user), such as ec2-user. If you don't know which account to use, simply try to ssh into the instance as root and you will usually get a message back telling you:
Please login as the user "ec2-user" rather than the user "root".
I am not sure if there is a password on the root account in that image. Regardless, the recommended way to log into instances is by creating an SSH key (euca-create-keypair KEYNAME > KEYNAME.pem), specifying it when running an instance (euca-run-instances -k KEYNAME), and then logging in using the generated key (ssh -i KEYNAME.pem root@INSTANCE-IP). You'll probably have to change the permissions on that .pem file before SSH will allow you to use it (chmod 0600 KEYNAME.pem). The instance obtains the public portion of the key from the cloud at boot time and adds it to the authorized_keys file.
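Put together, the whole flow looks roughly like this (the key name and image ID are placeholders):
euca-create-keypair mykey > mykey.pem
chmod 0600 mykey.pem
euca-run-instances -k mykey emi-XXXXXXXX
ssh -i mykey.pem root@INSTANCE-IP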
Is there a way to sync the images folder between my live server and the staging server, so that when a new image is added to the live server it is copied automatically to staging?
I'm currently on Rackspace servers (both of them).
You haven't mentioned what operating system you're using, or how immediate you want this to happen, but I would look into using rsync: set up login using SSH key authentication (instead of a password), and add a cron job that runs it regularly.
On live, as the user that does the copying run this command:
ssh-keygen
(Leave the passphrase empty).
Next, copy the public key to the staging server (make sure you don't overwrite an existing authorized_keys file; if it already exists, you have to append id_rsa.pub to it instead):
scp ~/.ssh/id_rsa.pub staging-server:.ssh/authorized_keys
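If the file does already exist, an append over ssh avoids clobbering it:
cat ~/.ssh/id_rsa.pub | ssh staging-server 'cat >> ~/.ssh/authorized_keys'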
Finally set up the cron-job:
echo '15,45 * * * * rsync -avz -e ssh /path/to/images staging-server:/path/to' | crontab -
This runs the rsync at quarter past and quarter to every hour. For more info on the cron format, see the appropriate man page:
man 5 crontab
To understand the rsync options, check the rsync manpage. This command won't remove images on staging when you remove them on your live server, but there are options for that; see the sketch below.
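For example, adding --delete makes rsync mirror deletions as well; use it with care, since anything removed on live disappears from staging too (same paths as above):
rsync -avz --delete -e ssh /path/to/images staging-server:/path/to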
Also, remember to run the command manually once as the user in question, to accept ssh server keys and make sure key auth is working.