I am a newbie to AWS and Bitnami LAMP. I loaded a Bitnami LAMP EC2 instance and I need to ready it for HIPAA compliance. My question, more specifically, is: how do I encrypt the data on the stack?
Is it already encrypted?
Does the LAMP setup put the database on a separate EBS volume that I can encrypt?
If so, do I just encrypt the EBS volume?
I am using version 5.4.17-0 on Ubuntu 12.04.
So I needed to attach an EBS Volume, encrypt it, mount it, and move my MySQL database to it. The links that are most helpful are:
Encrypting a volume: http://www.cromwell-intl.com/security/ec2-secure-storage.html
Copy Database to volume: http://dunniganp.wordpress.com/2012/11/28/moving-a-mysql-database-from-one-ebs-volume-to-another-on-aws-ec2/
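In case it helps the next person, here is a rough sketch of those two steps. The device name /dev/xvdf, the mount points, and the Bitnami paths are all assumptions; every command needs root on the instance, and MySQL should be stopped before its data is copied:

```shell
# 1. Encrypt and mount the newly attached EBS volume (device name assumed)
sudo cryptsetup luksFormat /dev/xvdf          # prompts for a passphrase
sudo cryptsetup luksOpen /dev/xvdf securedata
sudo mkfs.ext4 /dev/mapper/securedata
sudo mkdir -p /mnt/securedata
sudo mount /dev/mapper/securedata /mnt/securedata

# 2. Move the MySQL data directory onto the encrypted volume
sudo /opt/bitnami/ctlscript.sh stop mysql     # Bitnami's control script
sudo rsync -a /opt/bitnami/mysql/data/ /mnt/securedata/mysql/
# point MySQL's datadir at the new location in my.cnf, then:
sudo /opt/bitnami/ctlscript.sh start mysql
```

The two links above cover the same ground in more detail, including making the mount survive reboots.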
I am trying to install Wordpress on the Swisscom Cloud Foundry application cloud. To install it I need SSH with private and public key pairs (not cf ssh).
I follow the steps here:
https://github.com/cloudfoundry-samples/cf-ex-wordpress
Is this possible? What are the correct values for:
SSH_HOST: user@my-ssh-server.name
SSH_PATH: /home/sshfs/remote
Is this possible?
It depends on your CF provider. This method of running Wordpress requires that you use a FUSE filesystem (SSHFS) to mount the remote file system over the wp-content directory of your Wordpress install. In recent versions of CF (I can't remember exactly where this changed) you are no longer allowed to use FUSE-based file systems.
Before you spend a lot of time on this, you might want to validate that your provider still allows FUSE. You can validate with a simple test.
Push any test app to your provider.
cf ssh into the application container.
Check that the sshfs binary is available.
Try using sshfs to mount a remote filesystem (man page | examples).
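The last two steps above can be sketched roughly like this, run inside the app container after `cf ssh` (the host and paths in the commented trial mount are placeholders):

```shell
# Check whether the sshfs binary is present in the container
if command -v sshfs >/dev/null 2>&1; then
  sshfs_status="found"
  echo "sshfs found: FUSE mounts may still be possible"
  # A trial mount would look like (host and paths are placeholders):
  #   mkdir -p /tmp/remote
  #   sshfs user@my-ssh-server.name:/home/sshfs/remote /tmp/remote
else
  sshfs_status="missing"
  echo "sshfs not found: this provider has likely dropped FUSE support"
fi
```

If the binary is missing there is no point attempting the mount; the buildpack approach will not work on that provider.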
If you can successfully mount a remote filesystem via SSH using the steps above then you should still be able to use the method described in that example application.
If you cannot, the next best option is to use a plugin that allows storing your media on a remote system. Most of these are for S3. Search Google or the WP plugin repo; they're easy enough to find.
There is a better solution on the horizon called Volume Services. You can read more about this here. I have not seen any public CF providers offering volume services though.
What are the correct values for:
SSH_HOST: user@my-ssh-server.name
This should be the user name and host name of your SSH server. This is a server that exists outside of CF. Examples: my-user@192.0.2.10 or some-user@host.example.com. You should be able to run ssh <this-value> and connect without entering a password. This is so that the volume can be mounted automatically, without user interaction, when your app starts.
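Setting up that password-less login is the usual SSH key dance; a minimal sketch, where the key path and server name are placeholders:

```shell
# Generate a key pair with no passphrase (key path is an example)
rm -f /tmp/cf_wp_key /tmp/cf_wp_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/cf_wp_key -q

# Copy the public key to the remote SSH server (placeholder host); after that,
# `ssh -i /tmp/cf_wp_key user@my-ssh-server.name` should log in with no prompt:
#   ssh-copy-id -i /tmp/cf_wp_key.pub user@my-ssh-server.name
ls /tmp/cf_wp_key /tmp/cf_wp_key.pub
```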
SSH_PATH: /home/sshfs/remote
This is the full path on the remote server where you'd like to store the Wordpress files. In other words, this directory will be mounted as the wp-content directory of your app.
So, I have Bitnami Wordpress set up on an m1.small EC2 instance. About every hour, the site suddenly has a problem connecting to the database. The only way I can get it to work again is by rebooting the instance.
Has anyone encountered this problem before or possibly have ideas for a fix?
Many thanks!
(Also, if you need me to provide any extra info I'd be glad to do so)
Yes, we've had this problem before with the Bitnami AMI / MySQL just this week. It's normally because the MySQL server daemon dies on the EC2 instance.
To solve it, we set up the MySQL database on RDS and connected Wordpress to that instead. The database will perform better on RDS and you won't have to worry about the daemon dying. If RDS is not an option then you'll have to dig into the MySQL / Wordpress logs to find out what's going wrong with MySQL.
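If you do need to debug it in place, a quick health check along these lines tells you whether the daemon has died and what the logs say. The log paths here are guesses; Bitnami stacks often log under /opt/bitnami:

```shell
# Is mysqld still running?
if pgrep -x mysqld >/dev/null 2>&1; then
  mysqld_state="running"
else
  mysqld_state="dead"
fi
echo "mysqld is $mysqld_state"

# Show the tail of whichever MySQL error log exists (paths are assumptions)
for log in /opt/bitnami/mysql/data/mysqld.log /var/log/mysql/error.log; do
  if [ -f "$log" ]; then tail -n 20 "$log"; fi
done
```

On an m1.small, out-of-memory kills are a common culprit, so `dmesg | grep -i kill` is also worth a look.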
Not a programming question but definitely related to big data analysis.
I have set up RStudio Server on an Ubuntu EC2 instance for the first time and successfully opened RStudio Server in my browser. I also have the PuTTY SSH client.
I had a file in an S3 bucket. I ran this command to bring it from S3 to my EBS volume:
s3cmd get s3://data-analysis/input-data/filename.csv . I assume this command downloads the file from S3 into the EBS volume.
1) How do I set the path in RStudio Server to my mounted EBS volume?
2) Why do I not see the contents of my EBS volume in the RStudio Files pane (bottom right)?
I also tried to list the contents of my volume in the ssh using this:
$ ls /dev/xvdal
I have scoured the internet looking for help on this but have not found the nuts-and-bolts detail for this problem anywhere. Please help!
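One note on that last command: a device node like /dev/xvda1 (digit one, not a letter l) is a block special file, not a directory, so ls on it only prints the node itself. The volume has to be mounted somewhere before its files are visible. A sketch, where the device name and mount point are assumptions and the mount commands need sudo on the instance:

```shell
# See attached block devices and where they are mounted:
#   lsblk
# Mount the volume, then list its contents (assuming the volume is /dev/xvdf):
#   sudo mkdir -p /mnt/data
#   sudo mount /dev/xvdf /mnt/data
#   ls /mnt/data
# `stat` shows why ls on a device node itself is not useful:
stat --format='%n is a %F' /dev/null
```

Once the volume is mounted, point RStudio at it with setwd("/mnt/data") in the console, or navigate there in the Files pane.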
I have 3 servers with CentOS 5.5 and Drupal installed.
Now I want to keep the files and database in sync across all servers.
Thank you.
If you want to have this fully automatic:
Declare one server as the master/source server. Any changes on the client machines are overwritten.
Use crontab to start the synchronization repeatedly on the client machines and to start the Drupal cron on the master machine.
Install SSH and set up key files without a passphrase to get secure, reliable, and unattended communication between the servers.
Use the Backup and Migrate module to get a MySQL backup triggered by cron.
Do the file synchronization with rsync and keep an eye on file permissions to make sure files are accessible by the apache user on the destination servers.
Import the result of the Backup and Migrate run into the client servers' databases.
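Put together, the cron pieces on a client server might look roughly like this /etc/crontab fragment. The hostnames, users, paths, and schedule are all assumptions for illustration:

```shell
# /etc/crontab fragment on each client server (names and paths hypothetical)
# Mirror the master's Drupal files every 15 minutes; -a keeps permissions,
# --delete makes the client an exact mirror of the master
*/15 * * * * apache rsync -az --delete -e ssh deploy@master.example.com:/var/www/html/drupal/ /var/www/html/drupal/
# Pull the latest Backup and Migrate dump and load it into the local DB
*/30 * * * * apache scp deploy@master.example.com:/var/backups/drupal-latest.mysql /tmp/ && mysql drupal < /tmp/drupal-latest.mysql
```

This only works unattended if the passphrase-less SSH keys from the step above are in place for the user running the cron jobs.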
A completely different approach would be to use the views module and create RSS Feeds of your nodes. The other servers could read and view them or update their data.
Again a different case: If you want to setup your 3 servers for load balancing / fail over purposes choose a distributed file system and a mirror setup for your db. This way the systems look like one big logical machine with the advantage that single physical machines can crash without crashing the whole system.
Can you share some pointers on the best way, or best practices,
to install a web application on Unix systems?
For example:
where to place the app and its databases, and so forth,
how to configure it to be secure and easy to back up,
etc.
One suggestion I already know of is to create a unique user for each app.
The app in question is JIRA on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server
To avoid having to open extra firewall ports, I set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL.
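The mod_proxy piece of that last step might look something like this, assuming JIRA's Tomcat listens on port 8080 (hostname and port are placeholders, and mod_proxy plus mod_proxy_http must be loaded):

```apache
<VirtualHost *:80>
    ServerName jira.myhost.com
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```

With this in place, only port 80 needs to be open; Tomcat's 8080 stays bound to localhost.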
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.