I'm currently using several LXC containers to isolate other users' programs from the host system. The rootfs of each container uses a union filesystem plus a few bind mounts from the host, and everything works correctly.
Now I want to use s3fs to mount an S3 bucket on the host and bind-mount it into a directory in the rootfs of each container. It needs to be mounted by the host because I don't want any of the users in the container to see the AWS secret key. This appears to work fine (I can see the files in the s3fs mount in the correct place), but once I am inside the container, the s3fs files are no longer visible.
The mount is visible if s3fs is run from inside the container, but again, I need it mounted from the host.
Is there a particular setting that I need to configure to get this to work properly, or is it just not possible due to FUSE?
You need to create the bind mount before starting the container. The problem you're currently experiencing is caused by bind-mounting after the container has already started: the container gets its own mount namespace when it starts, so mounts made on the host afterwards are not visible inside it. Alternatively, just restart the container after the mount is in place.
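A minimal sketch of that ordering, assuming s3fs-fuse and the classic lxc tools (the bucket name, mount points, and container name are placeholders):

# mount the bucket on the host first; allow_other lets users other than the mounting user access the FUSE mount
s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs -o allow_other
# bind it into the container's rootfs while the container is stopped
mount --bind /mnt/s3 /var/lib/lxc/mycontainer/rootfs/mnt/s3
# then start the container
lxc-start -n mycontainer

You can also let LXC set up the bind mount itself with an lxc.mount.entry line in the container's config, which guarantees it happens at start time.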
I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is save that work to Docker Hub or possibly GitHub (I assume the updated images would be backed up on my Docker Hub) and work on a totally different computer, picking up where I left off.
Is that possible?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Those volumes then need to be backed up (and for all that the Docker documentation endorses named volumes, if you use host directories, your normal backup solution will work fine).
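For example, with the official wordpress image you might bind-mount wp-content from the host; a minimal sketch (the container name, port, and host path are assumptions):

docker run -d --name my-wordpress \
  -p 8080:80 \
  -v "$PWD/wp-content:/var/www/html/wp-content" \
  wordpress:latest

Everything under ./wp-content on the host now survives deleting and recreating the container, and your normal backup tooling can see it.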
In short, I’d recommend:
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally (a backup sketch follows this list).
Containers are disposable. You don’t need to back up a container per se, you should always be able to recreate it from the artifacts above.
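For named volumes, one common backup sketch is to mount the volume read-only into a throwaway container and tar it up (the volume name wpdata is an assumption):

docker run --rm \
  -v wpdata:/data:ro \
  -v "$PWD:/backup" \
  alpine tar czf /backup/wpdata-backup.tar.gz -C /data .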
I am evaluating a sample application with WebSphere 8 running in Docker.
https://hub.docker.com/r/ibmcom/websphere-traditional/
Following this tech note, I want to change some server settings.
http://www-01.ibm.com/support/docview.wss?uid=swg21614221
But as you know, a Docker container throws away any changes in container storage once the application server is restarted.
So my question is: how can I preserve these server settings changes beyond a server restart, for example by using a Docker volume?
If you want to make changes via the UI (admin console), then consider using docker commit to save your modified container as a new image and then spawn containers from that new image: https://docs.docker.com/engine/reference/commandline/commit/
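A minimal sketch of that flow (the container and image names are assumptions):

# save the container's current state as a new image
docker commit my-websphere-container my-websphere:configured
# spawn a fresh container from the saved image
docker run -d --name websphere2 my-websphere:configured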
You want to customize the image. The simplest way is to have your own Dockerfile with FROM ibmcom/websphere-traditional:latest, then RUN whatever wsadmin commands (with -conntype NONE) you need to perform the customization, including your application deployment.
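A minimal sketch of such a Dockerfile, assuming a wsadmin Jython script and the usual WebSphere install path (both the script name and the path are assumptions; check the image docs):

FROM ibmcom/websphere-traditional:latest
# copy in a script containing your settings changes
COPY customize.py /tmp/customize.py
# apply it offline, without connecting to a running server
RUN /opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -conntype NONE -f /tmp/customize.py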
I'm getting started with running Docker on macOS, and I was just able to install a WordPress container and get it running locally.
But where the heck are the actual WordPress files?
Do I need to SSH into the container so I can view/edit them there? If so, how would one go about that?
WordPress files are kept inside the container; for example, you can find wp-content at:
/var/www/html/wp-content
But to get "inside" your running container, you will have to do something like docker container exec -it <your_container_name> bash. More here: How to get into a docker container?
Containers are considered ephemeral, which means that a good practice is to work in a way that lets you easily stop/remove a container and spin up a new one without losing your stuff. To persist your data you have the option to use volumes.
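For example, with the official wordpress image you could keep the whole document root in a named volume; a minimal sketch (the names and port are assumptions):

docker run -d --name wp \
  -p 8080:80 \
  -v wpdata:/var/www/html \
  wordpress:latest

The files then live in the wpdata volume and survive removing the wp container.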
You might also want to take a look at this, which worked for me: Volume mount when setting up Wordpress with docker. If your case is developing WordPress in Docker containers, then... it's a different case.
If you have not set up a bind mount when running the Docker image for the first time, you can still do the following.
docker volume ls
This will list all of the volumes used by your local Docker.
What you can do then is the following:
docker volume inspect "VOLUME NAME"
e.g. docker volume inspect "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"
In the output you will find the Mountpoint of the Docker volume. There could be more than one volume; each of them will have its own mount point.
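If you only want the mount point, docker volume inspect also accepts a Go template via --format; a quick sketch (the volume name is an assumption):

docker volume inspect --format '{{ .Mountpoint }}' wpdata
# prints something like /var/lib/docker/volumes/wpdata/_data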
How would you restart a service, say for example nginx, when a config file changes? For example, I've got Puppet creating some nginx config files and placing them on a volume which is mounted into my nginx container. At the moment I am using docker-gen, but are there any other methods?
Docker containers are meant to be ephemeral. Also, Docker "containerizes" whatever process you are running by making that process PID 1 inside your container. That means there is no traditional init system; in fact, no init system at all. And as you know, when the process inside your container exits, the container dies. So if we approach the problem from the standpoint of implementing ephemeral containers, you don't restart your service: you create a new container using your modified configuration. And as mentioned in the comments by thaJeztah, you can also docker restart your container to refresh the configuration.
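If you just need nginx to pick up a changed config without replacing the container, nginx itself supports a hot reload; a minimal sketch (the container name is an assumption):

# ask the nginx master process to reload its configuration
docker exec my-nginx nginx -s reload
# or restart the whole container
docker restart my-nginx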
Now, there are a couple of ways to hammer this square peg into a round hole. You are better than that... However, you've already noticed that docker-gen will get you nearly there. Likewise, if you take a dive into how the jwilder/nginx-proxy image works, you'll get a better idea of how docker-gen works in practice. But you've probably already seen that, since you're already using docker-gen.
The other option is to shoehorn in something like supervisord. There is plenty of information about doing that online; tons of people have done this in the past. For other people who may not understand why that solves the problem: supervisord becomes your container's PID 1, and allows you to restart the child nginx process "like normal", but without killing your container.
I'm a newbie with these technologies (OpenStack / Docker / Vagrant), and I'm not sure I understood them correctly (most likely I did not). My understanding is that they give you something like a portable application that runs with the same development configuration, to ensure the whole development team has the same setup. But I did not understand what comes after development, and how to benefit from them with a Dart app.
My questions are:
1. Correct my understanding.
2. Do I need the end user to have these things installed on their system, and to run my application through them, the same as in the development stage?
3. How can I build/develop/distribute a Dart app through them? Maybe because these, as well as Dart, are new, I could not find enough info while googling.
Thanks
Docker is similar to a virtual machine like VMware or VirtualBox in that it creates an abstraction layer between the host operating system and the operating system running within a Docker container. The difference is that Docker doesn't emulate the entire hardware. The disadvantage is that Docker only runs on Linux, and only Linux can be run inside Docker. If your host is an Intel system you can't run an ARM Linux inside the container. (Theoretically you can run VirtualBox inside Docker and run Windows or other OSes in it.)
With Docker you can test your application locally in the same environment as the application will run when deployed.
When you for example create an application you want to run in Google Compute Engine, you install and test it locally inside a Docker container and then deploy the Docker container to Google Compute Engine as a whole unit. When there is a bug in the deployed application, you should be able to reproduce it locally as well, because it's just a 1:1 copy. No bug could have been introduced because the operating system or other dependencies were installed differently in the deployment environment than in the development/test environment.
The Dockerfile is a set of instructions for setting up a Docker container. If you want to create a new Docker container (for example for a new developer), you just let Docker process the Dockerfile and a new Docker container is created from it. This makes it easy to create new containers.
If you want to update one dependency to a newer version, or add or remove components to/from the environment, you change the Dockerfile and create a new container from it. This way you avoid the situation where manual additions and removals to existing containers let the containers of different developers/testers/deployments diverge from each other.
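For the Dart part of the question, a minimal Dockerfile sketch for a server-side Dart app (the base image tag and entrypoint path are assumptions):

FROM dart:stable
WORKDIR /app
# resolve dependencies first so this layer is cached across code changes
COPY pubspec.* ./
RUN dart pub get
# then copy the rest of the sources and define the entrypoint
COPY . .
CMD ["dart", "run", "bin/main.dart"]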
I haven't used OpenStack myself but from the web page it seems to provide components and tools to build and manage your own cloud infrastructure.
I also haven't used Vagrant myself, but it seems to help automate a lot of tasks related to creating and managing virtual machines like VMware, VirtualBox, Docker, and probably others.
When you have for example a server application, it probably consists of a number of components you don't want to all run in one container, but split up into several containers: one container for the database, one for the web server, one for the backend application (created in Dart, for example), and others. It can become cumbersome to manage all those containers; Vagrant helps to automate the related tasks.