I have created a chroot using
sudo sh ~/Downloads/crouton -r precise -t unity
I did some config in the chroot and ran a -u update.
Then I moved it to a flash drive with
sudo edit-chroot -m /media/removable/MYFLASHDRIVE precise
where I can run it with -c /media/removable/MYFLASHDRIVE as per this issue
I now wish to add the keyboard target with
sudo sh -e ~/Downloads/crouton -n precise -t keyboard -u
but there is no option to modify the path (like -c for edit-chroot), and the issue above indicated there is no way to modify crouton's default chroot directory.
How can further targets be added to the chroot without moving it back off the usb drive?
I was able to make it work for me by symlinking to a directory on my external drive, and then running the commands as normal.
Back up your chroots first: this worked for me, but I can't guarantee that it will work for you, or that it won't somehow delete your stuff.
1. Label your drive "external". Using a separate Ubuntu box is the easiest way: install GParted through apt-get or the software centre, run it, make sure your external drive is selected in the top-right drop-down, then right-click your drive's partition and select "Label". Type "external", click OK, then click Apply.
2. Create a folder on your drive called "chroots". Move your chroot's folder into it.
3. Set up a symlink on your Chromebook. Open a new chronos shell on your Chromebook and run these commands:
cd /mnt/stateful_partition/crouton
sudo mv chroots chroots.old
sudo ln -s /media/removable/external/chroots ./chroots
4. Run crouton commands as normal. You shouldn't need to specify -c on any of your crouton commands; you can just run them as if the chroot were installed locally, as in the example below.
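For example, the keyboard target from the original question should now install with the usual syntax. A minimal sketch, assuming the chroot is still named precise and the crouton installer is still in ~/Downloads:
sudo sh ~/Downloads/crouton -n precise -t keyboard -u
Because crouton resolves chroots through /mnt/stateful_partition/crouton/chroots, the symlink makes the external location transparent to commands like this.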
I prepared an ARM template that creates the following Azure resources: a Linux VM deployment, a Storage Account deployment, and a file share in that Storage Account.
The template works fine, but I would like to add one thing: mounting the file share on the Linux VM (using the script from the file share blade, as proposed by Microsoft).
I would like to use the Custom Script Extension and its "commandToExecute" option to paste the inline Linux script (the one for mounting the file share).
My question is: how do I retrieve the password for the file share and then pass it as a parameter to the inline script? Is that possible? Is it possible to paste the file share mounting script as an inline script in the ARM template, or is there any other way to complete my task? I know that I can store the script in a storage account and put its "blob SAS URL" in the Custom Script Extension area of the ARM template, but the question of how to retrieve the password for the File Share remains. Below is the script for the file share mount.
sudo mkdir /mnt/StorageAccountName
if [ ! -d "/etc/smbcredentials" ]; then
    sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/StorageAccountName.cred" ]; then
    sudo bash -c 'echo "username=xxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
    sudo bash -c 'echo "password=xxxxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
fi
sudo chmod 600 /etc/smbcredentials/StorageAccountName.cred
sudo bash -c 'echo "//StorageAccountName.file.core.windows.net/test /mnt/StorageAccountName cifs nofail,vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //StorageAccountName.file.core.windows.net/test /mnt/StorageAccountName -o vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino
You can retrieve the storage account key inside the template with the listKeys() function, as in this quickstart example:
listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value
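One pattern that keeps the key out of the template body itself is to parameterize the mount script and let commandToExecute pass the listKeys(...) value in as an argument. The sketch below is an illustration rather than your exact setup: the script name mountshare.sh, the argument order and the way the account name is reused as the SMB username are assumptions (the share name test is taken from your script).
#!/bin/bash
# mountshare.sh -- usage: mountshare.sh <storage-account-name> <storage-account-key>
ACCOUNT="$1"
KEY="$2"

sudo mkdir -p "/mnt/${ACCOUNT}" /etc/smbcredentials

# write the credentials file referenced from /etc/fstab
sudo bash -c "echo 'username=${ACCOUNT}' > /etc/smbcredentials/${ACCOUNT}.cred"
sudo bash -c "echo 'password=${KEY}' >> /etc/smbcredentials/${ACCOUNT}.cred"
sudo chmod 600 "/etc/smbcredentials/${ACCOUNT}.cred"

sudo bash -c "echo '//${ACCOUNT}.file.core.windows.net/test /mnt/${ACCOUNT} cifs nofail,vers=3.0,credentials=/etc/smbcredentials/${ACCOUNT}.cred,dir_mode=0777,file_mode=0777,serverino' >> /etc/fstab"
sudo mount -t cifs "//${ACCOUNT}.file.core.windows.net/test" "/mnt/${ACCOUNT}" -o "vers=3.0,credentials=/etc/smbcredentials/${ACCOUNT}.cred,dir_mode=0777,file_mode=0777,serverino"
In the Custom Script Extension resource you would then build the command with concat(), for example concat('sh mountshare.sh ', parameters('storageAccountName'), ' ', listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value), with the exact parameter and variable names depending on your template. Put commandToExecute in the extension's protectedSettings rather than settings so the key does not show up in deployment logs.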
This Q&A is a response to this comment. The answer to the question in the comment is not trivial, is too big for a comment, and is not suitable as an answer to the question in that thread (answering my own question is officially encouraged). If you have a better answer, please post it!
The question is: How to install R on Solaris on a VirtualBox virtual machine?
A more up-to-date version is available from OpenCSW as r_base. To install it, follow the example in Getting started, replacing vim with r_base:
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -a r_base
/opt/csw/bin/pkgutil -y -i r_base
To install a development environment, you might also want:
/opt/csw/bin/pkgutil -y -i gcc4g++
/opt/csw/bin/pkgutil -y -i texlive
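To confirm the install worked, you can print the version from the shell. This assumes r_base places the R binary under /opt/csw/bin like other CSW packages; you may also want to add that directory to your PATH:
/opt/csw/bin/R --version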
Start by downloading and installing Oracle VM VirtualBox.
Then download and unzip the Oracle Solaris 11.1 VirtualBox Template. After you unzip the Oracle template you should see a file called OracleSolaris11_1.ova; that's what you'll open in VirtualBox.
Start VirtualBox, click on File, then Import Appliance, then navigate to and choose the ova file you just extracted. It will take some time to import.
Start the Solaris virtual machine by clicking the Start button in VirtualBox. It will take some time to boot, and you'll be prompted to set a root password, a user name and a user password. You'll then use those details to log in. Wait for the system to load, choose GNOME to ensure you get a desktop environment, and choose your time zone, keyboard layout and language (mine seemed to highlight Chinese as the default choice, so be careful not to click through that one too quickly).
Eventually you'll get a desktop. Right-click on the desktop, click Open Terminal, and then type (or paste) the following in the terminal:
sudo wget https://oss.oracle.com/ORD/ord-3.0.1-sol10-x86-64-sunstudio12u3.tar.gz && sudo wget https://oss.oracle.com/ORD/ord-3.0.1-supporting-sol10-x86-64-sunstudio12u3.tar.gz
That will connect to the internet and download two files you need. The next line will unpack those two archives:
sudo tar -xzvf ord-3.0.1-sol10-x86-64-sunstudio12u3.tar.gz && sudo tar -xzvf ord-3.0.1-supporting-sol10-x86-64-sunstudio12u3.tar.gz
And then this next line installs R, watch for the prompts after you run the line:
sudo bash install.sh
A lot will flash by in the terminal, concluding with Installation of <ORD> was successful
Now the next bit is where I deviate from the instructions here, because I didn't understand them. Move all the files beginning with lib from the archives that you unpacked into the directory where R needs them:
sudo mv lib* /usr/lib/64/R/lib/
That will return nothing in the terminal. Then we can run R simply by typing the following in the terminal:
R
And now you should have a regular R session running in the terminal.
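If you want a quick sanity check from the shell instead, either of these should work with the Oracle R Distribution installed above (the second runs a tiny non-interactive session):
R --version
echo 'sessionInfo()' | R --no-save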
I am trying to install Meteor on the HP14 Chromebook. It is a Linux x86_64 Chrome OS system.
Each time I try to install it I run into errors.
The first time I tried to install it the installer just downloaded the Meteor preengine but never downloaded the tarball or installed the actual meteor application structure.
So, I decided to try as sudo.
sudo curl https://install.meteor.com | /bin/sh
This definitely installed it, because you can see it with ls:
chronos@localhost ~/projects $ ls /home/chronos/user/.meteor/
Now when I try to run meteor --version or meteor create myapp without sudo I get the following error.
````
chronos@localhost ~/projects $ meteor create myapp
'/home/chronos/user/.meteor' exists, but '/home/chronos/user/.meteor/meteor' is not executable.
Remove it and try again.
````
When I try to run sudo meteor --version or sudo meteor create myapp I get this error.
chronos@localhost ~/projects $ sudo meteor create myapp
mkdir: cannot create directory ‘/root/.meteor-install-tmp’: Read-only file system
Any ideas? I'm thinking I have to make that partition writable. I already made partition 4 writable.
Put your Chromebook into dev mode.
http://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices
Boot into dev mode.
Press Ctrl+Alt+T to open crosh.
shell
sudo su -
cd /usr/share/vboot/bin/
./make_dev_ssd.sh --remove_rootfs_verification --partitions 4
reboot
After rebooting
sudo su -
mount -o remount,rw /
mount -o remount,exec /mnt/stateful_partition
Write yourself a read/write script
sudo vim /sbin/rw
#!/bin/bash
echo "Making FS Read/Write"
sudo mount -o remount,rw /
sudo mount -o remount,exec /mnt/stateful_partition
sudo mount -i -o remount,exec /home/chronos/user
echo "You should now have full Read/Write access"
exit
Change permissions on script
sudo chmod a+x /sbin/rw
Run to set read/write root
sudo rw
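To confirm the remounts took effect, one quick check (the grep pattern is just one way to filter the mount output) is:
mount | grep -E ' on / | on /mnt/stateful_partition '
The root line should now show rw, and the stateful partition should no longer list noexec.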
Install Meteor as indicated on www.meteor.com via curl, and meteor create works!
Alternatively, you can edit chromeos_startup, though that might not be the best idea. It is probably best to have read/write on demand as illustrated above.
cd /sbin
sudo vim chromeos_startup
Go to lines 51 and 58 and remove the noexec options from the mount command.
Down at the bottom of the script, above the note about ureadahead and below the if statement, add in:
mount -o remount,exec /mnt/stateful_partition
#uncomment this to mount root r/w on boot
mount -o remount,rw /
Again, editing chromeos_startup probably isn't the best idea unless you are so lazy you can't type sudo rw.
Enjoy.
This is super easy to fix!!
Just run this (or put it in .bashrc or .zshrc to make it permanent):
sudo mount -i -o remount,exec /home/chronos/user
Based on your question (you are using sudo) I assume you already have Dev Mode enabled, which is required for the above sudo command to work.
ChromeOS mounts the home folder using the noexec option by default, and this command remounts it with exec instead. And boom, Meteor will work just fine after that (and so will a bunch of other programs running out of your home folder).
Original tip: https://github.com/dnschneid/crouton/issues/928
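If you go the .bashrc route, a minimal sketch could look like this (the guard is my own addition, just to skip the remount when the folder is already executable; it assumes /home/chronos/user shows up in the mount list):
# remount the home folder with exec if it is still mounted noexec (requires Dev Mode)
if mount | grep /home/chronos/user | grep -q noexec; then
    sudo mount -i -o remount,exec /home/chronos/user
fi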
I am starting to use Docker, and now I would like to use it for running Alfresco instances.
I tried to install Alfresco using the single file installer obtained from:
http://wiki.alfresco.com/wiki/Community_file_list_4.2.e (572Mb)
After adding the installer file and running the recently created image, I execute:
root@3e8b72d208e4:/root# chmod u+x alfresco.sh
root@3e8b72d208e4:/root# mv alfresco.sh alfresco.bin
root@3e8b72d208e4:/root# ./alfresco.bin
root@3e8b72d208e4:/root#
After about a second, the ./alfresco.bin process exits with no output. It is supposed to prompt for some installer options.
I'm running Docker on 64-bit Ubuntu 13.10 with 8 GB of RAM. What would be the right procedure to install Alfresco in a Docker container using the installer?
The problem is that the BitRock installer requires tmpfs, which in turn requires extra privileges for the container running it. Run your container with
docker run -i -t -privileged <image> [<command>]
and execute
mount none /tmp -t tmpfs
within the container.
After that, the installer will run just fine.
Unfortunately, things get messy if what you want is to build an image from a Dockerfile. docker build does not provide the -privileged switch or a RUNP instruction. You might want to have a look at https://github.com/dotcloud/docker/issues/1916 for further discussion.
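One workaround, sketched below rather than an official feature (the image name, the resulting image tag and the installer location are assumptions), is to do the privileged install interactively and then persist the result with docker commit:
docker run -i -t -privileged <image> /bin/bash
# inside the container: give the BitRock installer the tmpfs it needs, then run it
mount none /tmp -t tmpfs
./alfresco.bin
exit
# back on the host: save the stopped container's filesystem as a new image
docker commit $(docker ps -l -q) alfresco-installed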
For getting the docker-compose.yml of the Community version, here's one command that helped me generate the files needed:
docker run -it --rm -v "$PWD:/app" -w "/app" -e XDG_CONFIG_HOME=/app/.yo_config -e npm_config_cache=/app/.cache node:alpine sh -c "npm i -g yo generator-alfresco-docker-installer && yo alfresco-docker-installer"
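Once the generator has written its files, starting the stack from the same directory should be the usual Compose invocation (a sketch; substitute docker compose if you use the newer plugin syntax):
docker-compose up -d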
I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it closes automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started (UPDATE: see answers for an explanation of this). In fact, no services are running at all. I can start it manually from the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well; Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach of setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in CMD, in accordance with sesm's remark.
MY OWN ANSWER
After working with Docker for some time, I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged on using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple lines of CMD statements, only the last one gets run.
Using tail to keep the container alive is a hack. Also note that with the -f option the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overridden in this case.
By using CMD /bin/riak start && tail -f /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
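To make the distinction concrete, the two invocations from the question behave differently only because of this override rule:
docker run -d quintenk/riak-dev                 # no command given, the Dockerfile CMD runs, so Riak starts
docker run -i -t quintenk/riak-dev /bin/bash    # /bin/bash replaces the CMD, so Riak does not start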
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes that process to block until I later attach to it and hit Enter.
arthur@macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur@macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID IMAGE COMMAND CREATED STATUS PORTS
35f253bdf45a subsonic:latest /bin/bash -c service 2 days ago Up 2 days 4040->4040
arthur@macro:~/docker$ sudo docker attach 35f253bdf45a
arthur@macro:~/docker$ sudo docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other clean up, such as stopping services and saving logs etc.
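For example, a slightly more elaborate wrapper, sketched here with the service name and log path carried over from the subsonic example above as placeholders, could trap the exit and do the cleanup before the container stops:
#!/bin/bash
# start the long-running service
service subsonic start
# run cleanup once the script exits, i.e. after read is unblocked
cleanup() {
    echo "stopping services and saving logs"
    service subsonic stop
    cp /var/subsonic/subsonic.log /raid/ 2>/dev/null
}
trap cleanup EXIT
# block here until someone attaches and presses enter
read -p "waiting"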
I use a simple trick whenever I start building a new docker container. To keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian for instance, I make sure I can ping.
This is, by the way, always nice for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.