Swisscom CloudFoundry with SSH keys - wordpress

I am trying to install Wordpress on the Swisscom CloudFoundry application cloud. To install it I need SSH with private and public key pairs (not cf ssh).
I am following the steps here:
https://github.com/cloudfoundry-samples/cf-ex-wordpress
Is this possible? What are the correct values for:
SSH_HOST: user@my-ssh-server.name
SSH_PATH: /home/sshfs/remote

Is this possible?
It depends on your CF provider. This method of running Wordpress requires that you use a FUSE filesystem (SSHFS) to mount a remote file system over the wp-content directory of your Wordpress install. In recent versions of CF (I can't remember exactly where this changed) you are no longer allowed to use FUSE-based file systems.
Before you spend a lot of time on this, you might want to validate that your provider still allows FUSE. You can validate with a simple test.
Push any test app to your provider.
cf ssh into the application container.
Check that the sshfs binary is available.
Try using sshfs to mount a remote filesystem (man page | examples).
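For example, a minimal check from inside the container might look like this (host, user and path are placeholders for an SSH server you control):
mkdir ~/mnt
sshfs my-user@my-ssh-server.example.com:/home/sshfs/remote ~/mnt -o reconnect
ls ~/mnt              # should list the remote directory
fusermount -u ~/mnt   # unmount again when done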
If you can successfully mount a remote filesystem via SSH using the steps above then you should still be able to use the method described in that example application.
If you cannot, the next best option is to use a plugin that allows storing your media on a remote system. Most of these are for S3. Search google or the WP plugin repo, they're easy enough to find.
There is a better solution on the horizon called Volume Services. You can read more about this here. I have not seen any public CF providers offering volume services though.
What are the correct values for:
SSH_HOST: user@my-ssh-server.name
This should be the user name and host name of your SSH server. This is a server that exists outside of CF. Examples: my-user@192.0.2.10 or some-user@host.example.com. You should be able to ssh <this-value> and connect without entering a password. This is so that the volume can be mounted automatically, without user interaction, when your app starts.
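If you haven't set up key-based authentication yet, the usual pattern (shown with the placeholder values from above, not anything specific to this sample) is:
ssh-keygen -t rsa                        # create a key pair if you don't have one; skip the passphrase
ssh-copy-id my-user@my-ssh-server.name   # install the public key on the remote SSH server
ssh my-user@my-ssh-server.name           # should now log in without a password
Keep in mind that the private key also has to be available to the app itself for the mount to work; check the sample's README for how it expects the key to be provided.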
SSH_PATH: /home/sshfs/remote
This is the full path on the remote server where you'd like to store the Wordpress files. In other words, this directory will be mounted as the wp-content directory of your app.
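How you pass these two values to the app depends on the sample, but a common way to attach them to a CF app is as environment variables (the app name and values below are hypothetical):
cf set-env my-wordpress SSH_HOST my-user@my-ssh-server.example.com
cf set-env my-wordpress SSH_PATH /home/sshfs/remote
cf restage my-wordpress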

Related

Running access for Artifactory

New to Artifactory so please bear with me.
Trying (and failing) to create new access token.
The GUI in Artifactory has nothing for this but points to a users guide (https://www.jfrog.com/confluence/display/RTF/Access+Tokens) which talks about managing access tokens through a WAR file.
Here is the blurb:
Access Service
From Artifactory version 5.4, access tokens are managed under a new service
called Access which is implemented in a separate WAR file, access.war. This
change has no impact on how access tokens are used, however, the Artifactory
installation file structure now also includes the added WAR file under the
$ARTIFACTORY_HOME/webapps folder. Artifactory communicates with the Access
service over HTTP and assumes it is running in the same Tomcat using the
context path of "access".
OK, great. So how do I access this thing?
I also don't know much about web apps/servers. Prior to today, I thought WAR was a fight between nations :-)
My Artifactory server process is running, and I can confirm that the access war file (apparently a jar file of sorts) is in the webapps dir.
I am able to reach Artifactory via "http://myserver:8081/artifactory/webapp/#/home".
As it turns out, I believe the interface for managing access tokens is not provided through a GUI. Rather, you have to use the REST API, for example with curl.
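As a sketch (credentials, host and user names are placeholders; check the REST API docs for your Artifactory version), creating a token looks roughly like this:
curl -u admin:password -X POST "http://myserver:8081/artifactory/api/security/token" \
  -d "username=my-user" \
  -d "scope=member-of-groups:readers" \
  -d "expires_in=3600"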
The documentation mentions:
It is up to the Artifactory administrator to make sure that all participating instances are equipped with the same key pair.
That means you need to have access to the server (where Artifactory is installed).
On that server, the folder where Artifactory is installed is referred to as ARTIFACTORY_HOME.
That is what is used in the next doc extract:
Start up the first Artifactory instance (or cluster node for an HA installation) that will be in your circle of trust. A private key and root certificate are generated and stored under $ARTIFACTORY_HOME/access/etc/keys.
Copy the private key and root certificate files to a location on your file system that is accessible by all other instances/nodes that are in your circle of trust.
Before bootstrapping, for each of the other instances/nodes, create the $ARTIFACTORY_HOME/access/etc folder and create a properties file in it called access.bootstrap.config with the following contents:
key=/path/to/private.key
crt=/path/to/root.crt
When each instance/node starts up, if the $ARTIFACTORY_HOME/access/etc/access.bootstrap.config file exists, then the private key and root certificate are copied from the specified location into the server's home directory under $ARTIFACTORY_HOME/access/etc/keys.
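On each of those other instances/nodes, that boils down to something like the following (the key and certificate paths are placeholders):
mkdir -p $ARTIFACTORY_HOME/access/etc
cat > $ARTIFACTORY_HOME/access/etc/access.bootstrap.config <<EOF
key=/path/to/private.key
crt=/path/to/root.crt
EOF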

Deploying a Symfony 2 application in AWS Opsworks

I want to deploy a php application from a git repository to AWS Opsworks service.
I've setup an App and configured chef cookbooks so it runs the database schema creation, dumping assets etc...
But my application has some user-generated files in a subfolder under the web root. The git repository has a .gitignore file in that folder, so an empty folder is there when I run the deploy command.
My problem is: after generating some files (by using the site) in that folder, if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
So it makes my previous user-generated stuff inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.
You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (This is something I would not do myself but the option is there.).
When using OpsWorks think of replicable/disposable servers.
What I mean by this is that if you can create one server (call it server A) and then switch to a different one in the same stack (call it server B), the result of using server A or server B should not impact how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is common between different versions of your app (every time you deploy, a new release directory is generated), when you destroy your server you run the risk of destroying your files.
Benefits and downsides of using S3?
Benefits:
S3 will give you high redundancy and availability to your files.
S3 is external to your application server, so if your server dies or you decide to move it to a different region, you can continue using the same S3 bucket.
Easy to scale. You could add multiple application servers that read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API in order to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference. It includes the code they use to upload the files to the S3 bucket in the example.
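Outside of the application code, you can try the same idea from the command line with the AWS CLI (bucket name and paths are made up):
aws s3 cp web/uploads/photo.jpg s3://my-app-uploads/uploads/photo.jpg   # store a file
aws s3 cp s3://my-app-uploads/uploads/photo.jpg /tmp/photo.jpg          # retrieve it again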
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed-up and restored.
It may be the fastest option to implement and integrate into your application (a small sketch follows the downsides list below).
Downsides:
You need to assign it to an instance before it is running.
It could be time consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally. While you can create copies of the EBS volume and assign them to different instances, the EBS volume will not be shared.
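If you do go the EBS route, here is a minimal sketch of keeping the user-generated folder out of the release directory; all paths are hypothetical and you would run this after each deploy (for example from a custom recipe):
shared=/mnt/ebs-uploads                          # persistent folder on the EBS volume
release=/srv/www/site_name/current/web/uploads   # folder inside the current release
mkdir -p "$shared"
rm -rf "$release"
ln -s "$shared" "$release"                       # every new release now points at the same persistent folder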
Downside of using a database?
Just do a google search on "storing files in database"
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at this repository, opsworks-chef-cookbooks; it contains some recipes to deploy a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.
Use Chef templates, and use them in a recipe in the OpsWorks deploy lifecycle event.

How to access a private github repository from Aegir through Drush Make on AWS

I have an EC2 instance running on AWS, with Aegir installed, and drush make works perfectly for installing new platforms, except when I need to download a theme from a private GitHub repository.
drush make doesn't have access to the private GitHub repository and the platform install fails.
How do I overcome this? Is there some fancy way to give drush make or the aegir user SSH keys for the git repository?
I don't know much about ssh-agent, but I figured maybe I could get that to run all the time on my server so aegir will have access to my GitHub.
How to make drush make access a private GitHub repository?
Generate an SSH key on the EC2 instance, then add the public key (usually id_rsa.pub) to the private repository as a deploy key.
stevenh512 is right, a step-by-step explanation follows:
Disclaimer: I use GitLab + custom VPS on Centos but the same workflow can be applied on any hosting (with ssh) and GitHub (ps I love GitLab for private repos)
Log in to the VPS as aegir; you probably can't (if your server is configured securely), so log in as root and su to aegir.
Go to the home dir (cd /var/aegir) and check if you have an SSH key. If you have one, skip the next step.
cat ~/.ssh/id_rsa.pub
If you don't have one, create one and don't use a passphrase (for more info see http://community.aegirproject.org/node/30#SSH_keys, but there are solutions if you want a passphrase). After creation you will see the key's randomart image. (Study SSH, it's too important for security!)
ssh-keygen -t rsa
Copy the key and then go to your GitHub/GitLab account/profile settings -> SSH keys -> Add SSH key. For the title give anything you want (like: Aegir Key) and for the key paste the key from your server.
cat ~/.ssh/id_rsa.pub
Now back on the server, you must add the Git host as a known host; we go the easy way: just log in with ssh and type yes when it asks to confirm the connection. Ready!
ssh git@github.com
- or -
ssh git@gitlab.com
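If you prefer not to open an interactive session, you can also add the host keys non-interactively (this is just a sketch; verify the fingerprints yourself):
ssh-keyscan github.com >> ~/.ssh/known_hosts
ssh-keyscan gitlab.com >> ~/.ssh/known_hosts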
Testing: make a .make file and save it somewhere public (like Dropbox; right-click, copy public URL), for example:
core = 7.x
api = 2
projects[drupal][version] = 7.26
projects[my_module_name][type] = "module"
projects[my_module_name][download][type] = "git"
projects[my_module_name][download][url] = "git@gitlab.com:my_repo.git"
projects[my_module_name][download][branch] = "master"
Go to the Aegir GUI, create a new platform and wait for verification (otherwise you can ssh into the server as aegir and test it with drush make url.make folder).
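For reference, the manual test as the aegir user would look something like this (URL and target folder are placeholders):
su - aegir
drush make https://example.com/test.make /var/aegir/platforms/test-platform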
Warning! This workflow isn't the most secure! Just experiment with it and configure your server properly!
Info: This workflow also works on your local dev machine (Linux, Mac, Cygwin) to play with private repositories on GitHub and GitLab.

sync file and database drupal

I have 3 servers with CentOS 5.5 and Drupal installed.
Now I want all servers to sync files and the database.
Thank you.
If you want to have this fully automatic:
Declare one server as the master/source server. Any changes on the client machines are overwritten.
Use crontab to start the synchronization repeatedly on the client machines and to start drupal cron on the master machine.
Install SSH and install key files without a passphrase to get secure, reliable and unattended communication between the servers.
Use the backup and migrate module to get a MySql backup triggered by cron.
Do the file synchronization with rsync and keep an eye on file permissions to make sure files are accessible by the apache user on the destination servers.
Import the result of the Backup and Migrate backup into the client servers' databases.
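A minimal sketch of the rsync and database steps above, with hypothetical hostnames and paths, run on each client machine (for example from crontab):
rsync -az --delete -e ssh master.example.com:/var/www/drupal/sites/default/files/ /var/www/drupal/sites/default/files/
chown -R apache:apache /var/www/drupal/sites/default/files   # keep files readable by the web server
ssh master.example.com 'mysqldump drupal' | mysql drupal     # copy the master database onto this client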
A completely different approach would be to use the views module and create RSS Feeds of your nodes. The other servers could read and view them or update their data.
Again a different case: if you want to set up your 3 servers for load balancing / failover purposes, choose a distributed file system and a mirror setup for your DB. This way the systems look like one big logical machine, with the advantage that single physical machines can crash without crashing the whole system.

Best way to install web applications (e.g. Jira) on Unixes?

Can you share some pointers on the best way, the best practices,
to install a web application on Unixes?
Like:
where to place the app and its data (databases and so forth),
how to configure it to be secure and easy to back up,
etc.
For example, I know one such suggestion -- to set up a unique user for each app.
The app in question is Jira on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions (a small sketch follows this list)
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server
In order to avoid having to open extra firewall ports, set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL.
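The init-script step above assumes a script named jira was dropped into /etc/init.d; registering and starting it on Fedora looks like this (names are placeholders):
chmod +x /etc/init.d/jira
chkconfig --add jira
chkconfig jira on
service jira start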
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.
