Storing datasets and intermediary files in Amazon EC2 [closed]

I'd like to use Amazon EC2 to work with large datasets in R.
I have launched an instance, installed R, and created an EBS image of the 300 GB "root" volume, unchecking "Delete on Termination".
I then started this AMI in a new instance, uploaded some datasets to it, and terminated the instance.
When I later launched this AMI on a new instance, the hard drive was in the same state as when I first created the AMI, but I expected the uploaded datasets to be available. Is this expected behavior? If so, what's the best way to store datasets and intermediate files between two sessions on Amazon EC2?

Perhaps you could use S3 as a filesystem.
Create an S3 bucket on AWS. In this example we're using the AWS command line utilities running locally:
aws s3 mb s3://bucketxyz
Then launch an EC2 instance. This example worked on Amazon Linux. SSH into the box and set up s3fs:
sudo yum install git gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel openssl-devel mailcap automake
git clone git://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr
make
sudo make install
... and then set up your AWS credentials:
echo '[AWS Access Key ID]:[AWS Secret Access Key]' | sudo tee /etc/passwd-s3fs
sudo chmod 400 /etc/passwd-s3fs
Then mount the drive as a folder:
sudo mkdir /bucketxyz
sudo s3fs bucketxyz /bucketxyz
This folder is now accessible like any other folder, but it resides in S3, so it is persistent and can be accessed from other instances if necessary.
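As a quick persistence check, you can copy a file into the mounted folder and confirm it landed in S3; the dataset file name here is a hypothetical example:
# Copy a local R data file into the mounted bucket
cp mydataset.RData /bucketxyz/
# Confirm the object is in S3; it will outlive any single instance
aws s3 ls s3://bucketxyz/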

Related

Get access to the live preview logs of a WordPress theme deployed by Bitnami on a GCP VM

I wanted to deploy this WordPress theme to a GCP VM, using Bitnami and the free tier, following this tutorial. I am able to deploy the default themes but, unfortunately, when I upload my own and click Live Preview I get the following message:
There has been a critical error on this website. Please check your
site admin email inbox for instructions.
Learn more about troubleshooting WordPress.
The first thing I thought when I searched the Internet was that my machine did not have enough RAM, as explained in this forum post.
So I tried to open the logs over SSH on the GCP virtual machine, but the VM seems to be empty.
officialdataguild@cloudshell:~ (the-data-guild-website)$ gcloud compute ssh wordpress-website-vm --project=the-data-guild-website
Did you mean zone [europe-west1-c] for instance: [wordpress-website-vm] (Y/n)? n
No zone specified. Using zone [us-east1-b] for instance: [wordpress-website-vm].
Updating project ssh metadata...working.Updated [https://www.googleapis.com/compute/v1/projects/the-data-guild-website].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.2745993064748503787' (ECDSA) to the list of known hosts.
Linux wordpress-website-vm 4.19.0-20-cloud-amd64 #1 SMP Debian 4.19.235-1 (2022-03-17) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
officialdataguild@wordpress-website-vm:~$ ls
officialdataguild@wordpress-website-vm:~$
Following this tutorial on viewing and examining logs, I looked for the right folder but wasn't able to find it:
officialdataguild@wordpress-website-vm:~$ cd ~/logs
-bash: cd: /home/officialdataguild/logs: No such file or directory
So, following this guide to troubleshooting problems with WordPress deployed with Bitnami, I went up one directory and did:
officialdataguild@wordpress-website-vm:~$ cd ..
officialdataguild@wordpress-website-vm:/home$ ls
bitnami officialdataguild
officialdataguild@wordpress-website-vm:/home$ cd bitnami/
officialdataguild@wordpress-website-vm:/home/bitnami$ ls
bitnami_credentials htdocs stack
officialdataguild@wordpress-website-vm:/home/bitnami$ test ! -f "/opt/bitnami/common/bin/openssl" && echo "Approach A: Using system packages." || echo "Approach B: Self-contained installation."
Approach A: Using system packages.
officialdataguild@wordpress-website-vm:/home/bitnami$ sudo /opt/bitnami/ctlscript.sh status
apache already running
mariadb already running
php-fpm already running
But I still can't find the theme live preview logs ...
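For what it's worth, on a Bitnami stack the web-server logs normally live under /opt/bitnami rather than in the home directory. A hedged place to start looking, assuming the standard Bitnami Apache layout (paths vary between stack versions):
# Tail the Apache error log in a typical Bitnami WordPress stack
sudo tail -n 50 /opt/bitnami/apache2/logs/error_log
# If the layout differs, search for log files under /opt/bitnami
sudo find /opt/bitnami -name '*log*' -type f 2>/dev/null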

MariaDB install on Ubuntu 16.04 [closed]

I installed MariaDB 10.1 on Ubuntu 16.04 LTS using this command:
sudo apt-get install mariadb-server
The install succeeded, but when I connect to MariaDB, no password is needed!
I tried the following, but none of it worked:
I ran the mysql_secure_installation command and finished the configuration successfully, but a password is still not needed.
SET PASSWORD FOR 'root'@'%' = PASSWORD('newpass'); followed by FLUSH PRIVILEGES;
update user set password=password('newpass') where user='root';
I want to use a password. What should I do?
My answer to "I want to use a password. What should I do?" is:
"no, you probably don't!" (depending on your app's deployment - see comments below)
With Ubuntu 15.10 and later, MariaDB installs with the unix_socket user authentication plugin enabled by default: https://mariadb.com/kb/en/mariadb/unix_socket-authentication-plugin/
With a fresh install, log in with sudo mysql -uroot and execute:
SELECT User, Password, Host, plugin from mysql.user WHERE user = 'root';
You should see a single user. This also means you cannot log in remotely as root with the default install (one of the tasks mysql_secure_installation performs).
+------+----------+-----------+-------------+
| User | Password | Host | plugin |
+------+----------+-----------+-------------+
| root | | localhost | unix_socket |
+------+----------+-----------+-------------+
I would recommend leaving this as-is and creating a non-root user with the appropriate permissions for your application to use.
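A minimal sketch of that, run from the shell; the database name, user name, and password (myapp, change-me) are hypothetical placeholders:
sudo mysql -uroot <<'SQL'
-- Create an application database and a dedicated non-root user
CREATE DATABASE myapp;
CREATE USER 'myapp'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON myapp.* TO 'myapp'@'localhost';
FLUSH PRIVILEGES;
SQL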
However, if you absolutely need a password-authenticated root user you need to change the plugin to mysql_native_password.
UPDATE mysql.user SET plugin = 'mysql_native_password', Password = PASSWORD('secret') WHERE User = 'root';
FLUSH PRIVILEGES;
Now quit, then log in with mysql -uroot -p.
Note that if you need to restart the service, you will need three commands instead of just sudo systemctl restart mysql. There is probably some .cnf setting that would make a plain restart work as expected, but I couldn't easily figure it out. Basically, the lesson is: use unix_socket.
sudo systemctl stop mysql
sudo kill -9 $(pgrep mysql)
sudo systemctl start mysql

Can I install Puppet on all my clients (hosts) at once?

I have installed Puppet on the master and on one of the clients. Now I want to install it on all 100 of my servers and sign the certificates. I know I can sign all the certificates at once, but is there a way to install Puppet on all the hosts at once?
Several ways:
Bake the image
Bake an image with the Puppet agent pre-installed for these 100 servers.
For example, add the shell command yum install -y puppet facter hiera when baking the CentOS image.
Refer to:
packer.io
packer-template
Once you've prepared the image, export it to vSphere or generate an AWS AMI; any instance started from this image will already have Puppet installed.
Using automation tools
If these clients are already created and running, use Ansible or any other automation tool to install Puppet directly.
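A minimal sketch with an Ansible ad-hoc command; the inventory file name (inventory.ini) is a placeholder, and SSH access to the hosts is assumed:
# Install Puppet on every host in the inventory via yum, with privilege escalation
ansible all -i inventory.ini -b -m yum -a 'name=puppet state=present'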
If you don't want to create an image, you can launch a bash "post-script" that will be executed just after each instance starts. See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
Example of a legacy EC2 CLI call to launch one instance:
ec2-run-instances <ami_id> --key KEYPAIR --user-data-file install.sh
and with this in the install.sh file :
yum install -y puppet facter hiera
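Note that ec2-run-instances comes from the legacy EC2 API tools; a roughly equivalent call with the modern AWS CLI (the AMI ID is a placeholder) would be:
aws ec2 run-instances --image-id <ami_id> --key-name KEYPAIR --user-data file://install.sh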

How to deploy Meteor and Phusion Docker to Digital Ocean with Docker?

What is a workflow for deploying to Digital Ocean with Phusion Docker and Node/Meteor support?
I tried:
FROM phusion/passenger-nodejs:0.9.10
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# ssh
ADD private/keys/akey.pub /tmp/your_key
RUN mkdir -p /root/.ssh && cat /tmp/your_key >> /root/.ssh/authorized_keys && rm -f /tmp/your_key
## Install dependencies
RUN apt-get update
RUN apt-get install -qq -y python-software-properties software-properties-common curl git build-essential
RUN npm install fibers@1.0.1
# install meteor
RUN curl https://install.meteor.com | /bin/sh
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Enable nginx
# RUN rm -f /etc/service/nginx/down
#setup app
RUN mkdir /home/app/someapp
ADD . /home/app/someapp
WORKDIR /home/app/someapp
EXPOSE 4000
# The last CMD wins, so run the app under baseimage's my_init to keep its init process
CMD ["/sbin/my_init", "--", "sh", "-c", "passenger start -p 4000"]
But nothing works, and I'm not sure how to manage updating, deploying, and running the app.
For example, how would you handle updating the app without rebuilding the Docker image?
Here is my suggested workflow:
Create an account on Docker Hub; you get one private repository for free. If you want a completely private repository hosted on your own server, you can run your own Docker registry and use it to host your images.
Create your image on your development machine (locally or on a server), then push the image to the repository using docker push
Update the image when needed and commit your changes with docker commit, then push the updated image to your repository (you should properly version and tag all your images).
You can start a Digital Ocean droplet with Docker pre-installed (from the Applications tab) and simply pull your image and run your container. Whenever you update and push your image from your development machine, simply pull it again from the droplet.
For large and complex infrastructure, I would recommend looking into Ansible to configure your Docker containers and manage Digital Ocean droplets as well.
Be aware that your container's data will be lost when the container is removed, so consider defining a volume in your container that is mapped to a shared folder on your host machine.
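A sketch of that push/pull cycle end to end; the image name, tag, and volume paths are hypothetical:
# On the development machine: build and push the image
docker build -t myuser/meteor-app:1.0 .
docker push myuser/meteor-app:1.0
# On the Digital Ocean droplet: pull and run, keeping app data on a host volume
docker pull myuser/meteor-app:1.0
docker run -d -p 4000:4000 -v /srv/app-data:/home/app/someapp/data myuser/meteor-app:1.0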
I suggest you test your Dockerfile in a local VirtualBox VM first. I wrote a tutorial about deploying a node.js app with Docker, in which I build several images (layers) instead of just one; when you update your app, you only need to rebuild the top layer. Hope it helps. http://vinceyuan.blogspot.com/2015/05/deploying-web-app-redis-postgres-and.html

How to install R on Solaris on a VirtualBox virtual machine?

This Q&A is a response to this comment. The answer to the question in the comment is not trivial, is too big for a comment, and not suitable as an answer to the question in that thread (answering my own question is officially encouraged). If you have a better answer please post it!
The question is: How to install R on Solaris on a VirtualBox virtual machine?
A more up-to-date version is available from CSW: r_base. To install it, follow the example in Getting started, replacing vim with r_base:
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -a r_base
/opt/csw/bin/pkgutil -y -i r_base
To install a development environment, you might also want:
/opt/csw/bin/pkgutil -y -i gcc4g++
/opt/csw/bin/pkgutil -y -i texlive
Start by downloading and installing Oracle VM VirtualBox.
Then download and unzip the Oracle Solaris 11.1 VirtualBox Template. After you unzip the Oracle template you should see a file called OracleSolaris11_1.ova, that's what you'll open in VirtualBox.
Start VirtualBox, click File, then Import Appliance, then navigate to choose the .ova file you just extracted. It will take some time to import.
Start the Solaris virtual machine by clicking the Start button in VirtualBox. It will take some time to boot, and you'll be prompted to set a root password, user name, and user password. Use those details to log in, wait for the system to load, choose GNOME to ensure you get a desktop environment, and choose your time zone, keyboard layout, and language (mine seemed to highlight Chinese as the default choice, so be careful not to click through that one too quickly).
Eventually you'll get a desktop. Right-click on the desktop and click Open Terminal, then type (or paste) in the terminal:
sudo wget https://oss.oracle.com/ORD/ord-3.0.1-sol10-x86-64-sunstudio12u3.tar.gz && sudo wget https://oss.oracle.com/ORD/ord-3.0.1-supporting-sol10-x86-64-sunstudio12u3.tar.gz
That will connect to the internet and download the two files you need. The next line unpacks those two archives:
sudo tar -xzvf ord-3.0.1-sol10-x86-64-sunstudio12u3.tar.gz && sudo tar -xzvf ord-3.0.1-supporting-sol10-x86-64-sunstudio12u3.tar.gz
And then this next line installs R; watch for the prompts after you run it:
sudo bash install.sh
A lot of output will scroll by in the terminal, concluding with Installation of <ORD> was successful.
Now, the next bit is where I deviate from the instructions here, because I didn't understand them. You'll move all files beginning with lib from the archives you unpacked into the directory where R needs them:
sudo mv lib* /usr/lib/64/R/lib/
That will return nothing in the terminal. Then we can run R simply by typing in the terminal:
R
And now you should have a regular R session running in the terminal.
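To confirm the install non-interactively, a quick sanity check from the terminal (both are standard R invocations):
R --version
Rscript -e 'sessionInfo()'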
