Syncing images folder on two servers - rsync

Is there a way to sync the images folder between my live server and the staging server, so that when a new image is added to the live server it is copied automatically to staging?
I'm currently on Rackspace servers (both of them).

You haven't mentioned what operating system you're using, or how immediate you want this to happen. I would look into using rsync. Set up login using ssh key authentication (instead of password), and add a cron job that runs it regularly.
On live, as the user that does the copying run this command:
ssh-keygen
(Leave the passphrase empty).
Next, copy the public key to the staging server (make sure you don't overwrite an existing authorized_keys file; if it already exists, you have to append id_rsa.pub to that file):
scp ~/.ssh/id_rsa.pub staging-server:.ssh/authorized_keys
Finally set up the cron-job:
echo '15,45 * * * * rsync -avz -e ssh /path/to/images staging-server:/path/to' | crontab -
This runs your script quarter past and quarter to every hour. For more info on the cron format, see the appropriate man page:
man 5 crontab
To understand the rsync options, check the rsync manpage. This command won't remove images on staging when you remove images on your live server, but there are options for that.
Also, remember to run the command manually once as the user in question, to accept ssh server keys and make sure key auth is working.
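An added example, not part of the original answer: shell helpers for the two places where the steps above can clobber existing state (the `authorized_keys` file and the user's crontab), plus `authorized_keys` options that limit what the passphrase-less key can do. The function names, the address 203.0.113.10 and the sample keys are hypothetical; `restrict` assumes OpenSSH 7.2 or later on staging.

```sh
# Added example: helper names and sample values are hypothetical.

# Prefix a public key with authorized_keys options so an unattended,
# passphrase-less key is accepted from one source address only and gets
# no pty or forwarding ("restrict" assumes OpenSSH 7.2 or later).
restricted_entry() {            # $1 = allowed source, $2 = public key line
  printf 'from="%s",restrict %s\n' "$1" "$2"
}

# Append an entry to authorized_keys only if its key blob is not already
# present; never truncates the file and sets the modes sshd expects.
# Assumption: the base64 key blob starts with "AAAA", as OpenSSH keys do.
append_key_once() {             # $1 = authorized_keys path, $2 = entry
  ako_blob=$(printf '%s\n' "$2" | grep -oE 'AAAA[A-Za-z0-9+/=]+' | head -n 1)
  [ -n "$ako_blob" ] || return 1
  ako_dir=$(dirname "$1")
  mkdir -p "$ako_dir" && chmod 700 "$ako_dir"
  touch "$1" && chmod 600 "$1"
  grep -qF -- "$ako_blob" "$1" || printf '%s\n' "$2" >> "$1"
}

# Read the current crontab text on stdin and print it with the new job
# added once, so piping the result to "crontab -" keeps the other jobs.
merge_cron_line() {             # $1 = new crontab line
  mcl_current=$(cat)
  [ -n "$mcl_current" ] && printf '%s\n' "$mcl_current"
  printf '%s\n' "$mcl_current" | grep -qxF -- "$1" || printf '%s\n' "$1"
}

# Typical use (typed by hand, nothing is executed when this file is sourced):
#   staging: append_key_once ~/.ssh/authorized_keys \
#              "$(restricted_entry 203.0.113.10 "$(cat id_rsa.pub)")"
#   live:    crontab -l 2>/dev/null | merge_cron_line "$JOB" | crontab -
```
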


Docker: unix "who" command doesn't work inside container

I have a Docker image that has one non-root user created named builder.
The application that is supposed to run inside the container uses the unix who command.
For some reason it returns empty string inside the container:
builder@2dc3831c558b:~$ who
builder@2dc3831c558b:~$
I cannot use whoami because of implementation details.
(I'm using Docker 1.6.2 on Debian Jessie)
EDIT (additional details regarding why I use "who"):
I use the command who with the parameters am i, that is, who am i. This is supposed to return the user who first made the login. So, for example, sudo who am i returns builder, while sudo whoami returns root.
The command who includes options like -b: time of last system boot.
Since all commands from a container translate into system calls to the kernel, that would not return anything container related, but docker-host related (i.e. the underlying host).
See also "Difference between who and whoami commands": whoami prints the effective username of the user running whoami, which is not the same as who (printing information about users who are currently logged in).
The current workarounds listed in issue 18547 are:
The registry configuration is stored in the client, so something as simple as cat ~/.docker/config.json will give you the answer you're looking for.
docker info | grep Username should give you this information.
But that is not the same as running the command from within a container session. id -u might be closer.
By default, there is no direct login when a container is started by the docker daemon.
As Auzias commented, only a direct ssh connection (initiating a login session) would allow who to return anything. But with docker, this is generally not needed since docker exec (for debug purposes) exists (and spares the image maintainer from including ssh unless it is really needed).

SSH Key Permission Denied

I'm trying to set up cloud hosting with Digital Ocean.
Please skip to the bold part with asterisks (***) for the actual problem. Everything below here, above that part is background info.
I need to generate an RSA key pair, so I navigate to my cd ~/.ssh/ directory, then:
ssh-keygen -t rsa
I already have existing id_rsa and id_rsa.pub files, so when prompted:
Enter file in which to save the key (/demo/.ssh/id_rsa):
I enter the following to create a new pair:
~/.ssh/id_cloudhosting
I'm then asked for a passphrase, which I simply press return for "no password":
Enter passphrase (empty for no passphrase):
I repeat the above for confirmation, and the final output looks as follows (just a demo image):
Now that I have two new files, id_cloudhosting and id_cloudhosting.pub I need to copy the contents of the public file to my Digital Ocean hosting 'Add SSH console'. I do that like so:
cat ~/.ssh/id_cloudhosting.pub
Which returns the contents of the file:
ssh-rsa
bUnChOFcOd3scrambledABCDEFGHIJKLMNOPQRSTUVWXYZnowIknowmy
ABCnextTIMEwontyouSINGwithmeHODOR demo@a
I paste the key into my hosting console and it saves successfully.
The next step is where the permission issues start: ****************
I need to "spin up a new server" - step four from their docs. So I enter the following:
cat ~/.ssh/id_worker.pub | ssh root@[my.hosting.ip.address] "cat >> ~/.ssh/authorized_keys"
Which should copy the public key as root to a newly created file called authorized_keys
This step never gets created because I'm immediately asked for a password to my host. I didn't ever create one! I pressed return (or enter) at that point, so I do the same when prompted, and get permission denied!
root@[host.ip.address]'s password:
Permission denied, please try again.
root@[host.ip.address]'s password:
Permission denied, please try again.
root@[host.ip.address]'s password:
Permission denied (publickey,password).
How can I rectify these permission denied issues?
EDIT: FIX BELOW
It seems as though, by using an unconventional (other than id_rsa) file, I needed to explicitly identify the file by doing the following:
ssh root@droplet.ip.address -i /path/to/private_key_file
...be sure not to use the public_key_file there. I am not connected to the server from my terminal. This is after destroying my previous droplet, creating a fresh one, with fresh key files, as @will-barnwell suggested
Assuming you have followed the linked guide up to and through Step Three, when you create a new server from their Web UI use the "Add SSH Keys" option and select the key you added to your account previously.
When actually spinning up a new server, select the keys that you would
like installed on your server from the "Create a Droplet" screen. You
can select as many keys as you like:
Once you click on the SSH key, the text saying, "Your
root password will be emailed to you" will disappear, and you will not
receive an email confirmation that your server has been created.
The command you were using was to add an ssh key to a pre-existing server. Judging from the above quote I bet the password that you are being prompted for is in your email.
Why?
When you create a server on Digital Ocean (or really most cloud hosting services) a root password is automatically generated for you, unless you set the server up with an authorization key.
Using key authentication is definitely a good security choice, but make sure to read the instructions carefully, don't just copy/paste commands and expect it all to work out.
EDIT: OP's comments on the question have shed additional light on the matter.
New Advice: Blow your server away and set up the SSH keys as suggested, your server is probably unusable if it is not accepting your old SSH key and is prompting you for a password you don't have.
Be careful messing around with your last auth key, add a new one before removing an old one.

How to set -n (number of users ) for database server?

We have increased the -n parameter in the broker/db.pf file. We restarted the server, and when we check in promon it's still showing the same number of users. How do we increase the -n parameter?
I know you answered this yourselves but for future users a real answer can be good. There are several ways to set parameters like -n. This answer really applies to changing all startup parameters (but not what values are "good").
How you change this value depends on how you start your database. See below.
NB 1: you should be aware of your licensing plan before changing this number and contact your sales contact if needed.
NB 2: you should be aware that changing startup parameters can affect performance etc. Test new values in a separate environment before moving them to production.
NB 3: backup all files before messing around...
Managed Database
A managed database is a database that is handled by the AdminServer. OE Management is not needed for this approach. A working installation of OE Explorer is however recommended.
The managed database is started (and stopped etc) via either the web based OE Explorer interface or the dbman command line utility.
Settings are stored in conmgr.properties under your Progress installation. You can edit this file manually (save a copy first...) or via the OE Explorer (recommended way).
You will have a line like this in the file:
maxusers=20 # -n
Edit the number to your liking with your favourite editor.
You can also change this in the OE Explorer:
Log in to OE Explorer. Default location is http://servername:9090/.
Locate and click on the database (if it's not there it's not handled by the adminserver - see below).
Select Configuration
Select Configuration (again, not "servergroup")
Click EDIT
-n (or Max users) is located in the first group of settings ("General"). See picture below.
Edit the value and don't forget to save.
Scripted Database
A scripted database is a database that is started with a custom script (or directly from the command line). The actual startup could be handled by crontab, a user, the server generic startup script etc.
The OE AdminServer is not "aware" of this database. (You can make the AdminServer "a little" aware of it by running the dbagent command line utility with certain parameters. Read more about this in the manual).
You could generally divide into two ways of handling the script: with parameters in it or with parameters in a separate parameter file (often with the extension .pf).
Script with parameters in it
With this approach you store all parameters in the actual startup script.
proserve <dbname> -H <hostname> -S <serviceport> -n 10 -B 10000 -spin 10000 etc..
Script with a separate parameter file
With this approach you store the parameters in a separate file.
proserve <dbname> -pf /path/to/file/file.pf
The .pf-file can be formatted like the parameters in the command line:
-db <dbname> -H <hostname> -S <service> etc.
Or with newlines (this allows for comments in the file):
# Main database
-db <dbname>
-H <hostname>
-S <service>
You can also mix these two approaches.
Sources:
OE Management and OE Explorer
OE Database Management

Eucalyptus 3.4.2 CentOS 6 demo root password

I installed cloud-in-a-box/fastrack of Eucalyptus and am able to create an instance and log into it. But when trying sudo, sudo su - or logging in as root I'm asked for a password. I'm not sure what the password might be. Does anyone know what the default password for the image is?
I think this is how the image is designed. It uses the cloud-user account only and has no root access, nor does it allow sudo.
There are other starter images available that can be "installed" that have sudo as root enabled. In those cases you simply issue
sudo su -
and you become root.
To see what is easily available use:
eustore-describe-images
As a note, some of the other starter images have different accounts (not cloud-user), such as ec2-user. If you don't know which account to use simply try to ssh into the instance as root and it will usually get a message back telling you:
Please login as the user "ec2-user" rather than the user "root".
I am not sure if there is a password on the root account in that image. Regardless, the recommended way to log into instances is by creating an SSH key (euca-create-keypair KEYNAME >KEYNAME.pem), specifying it when running an instance (euca-run-instance -k KEYNAME), and then logging in using the key generated (ssh -i KEYNAME.pem root@INSTANCE-IP). You'll probably have to change the permissions on that .pem file before SSH will allow you to use it (chmod 0600 KEYNAME.pem). The instance obtains the public portion of the key from the cloud at boot time and adds it to the authorized_keys file.
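An added example, not from any of the original posts: a pre-flight check for the file handed to `ssh -i`, covering the two mistakes raised on this page (passing the `.pub` file instead of the private key in the Digital Ocean question, and a key file that is too open before `chmod 0600` in the answer just above), plus a helper that prints an ssh_config stanza pinning a non-default key to a host. Function names, the alias and the sample address are hypothetical.

```sh
# Added example: function names and sample values are hypothetical.

# Classify the file you are about to pass to "ssh -i".
# Prints one of: missing, public-key, unknown, perms-too-open, ok
check_identity_file() {         # $1 = path given to -i / IdentityFile
  [ -f "$1" ] || { echo missing; return 1; }
  cif_first=$(head -n 1 "$1")
  case $cif_first in
    ssh-*|ecdsa-sha2-*|sk-*) echo public-key; return 1 ;;  # the .pub half
    "-----BEGIN "*"PRIVATE KEY-----") ;;                   # PEM or OpenSSH private key
    *) echo unknown; return 1 ;;
  esac
  # ssh refuses a private key that group or others can access, so
  # characters 5-10 of the mode string must all be "-".
  cif_mode=$(ls -lL "$1" | cut -c5-10)
  [ "$cif_mode" = "------" ] || { echo perms-too-open; return 1; }
  echo ok
}

# Print an ssh_config stanza that ties a non-default key to one host, so
# plain "ssh ALIAS" offers that key and no other identity.
ssh_config_stanza() {           # $1 = alias, $2 = host/IP, $3 = user, $4 = key
  printf 'Host %s\n    HostName %s\n    User %s\n' "$1" "$2" "$3"
  printf '    IdentityFile %s\n    IdentitiesOnly yes\n' "$4"
}
```

Appending the output of `ssh_config_stanza` to `~/.ssh/config` replaces typing `-i` on every connection.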

NFS uid mapping - reboot

I (as everybody )))) try to mount NFS folder on client while keeping UIDs on CentOS 6.5.
So I have user test with uid 10000 on server (useradd -u 10000 -g 9999 test), that has files belonging to him. I export folder with no_all_squash option.
After that I create user test with uid 10000 on client, mount NFS folder but ls -ln shows files owner 99 (nobody) until client reboot.
After reboot all works fine, client sees files with uid 10000. It seems that client side kernel somehow doesn't update user list/cache.
The same behavior on user delete - until reboot it shows right UIDs (though user already deleted), after reboot - 99.
Because the case in question is not a regular user but a system where users are created/deleted dynamically, a reboot is by no means an option. Any ideas - some config reload, etc.?
Actually what will be well is to see real UIDs on server, despite user existence on client.
Thanks.
This can be solved by clearing the uid mapping cache on the client machines:
/usr/sbin/nfsidmap -c
You can see invalid entries in /proc:
cat /proc/keys | grep 3$
More info about the underlying technology:
https://www.kernel.org/doc/Documentation/security/keys.txt
https://www.kernel.org/doc/Documentation/filesystems/nfs/idmapper.txt
Also mentioned on Server Fault.
