We are trying to create a Docker container which will host and run our webapp (mainly written in PHP with Symfony2).
For the moment, the container embeds all the application code, cloned when building the image (through a Dockerfile). The app runs correctly, on OSX, through Vagrant (Precise64 base image).
We are now struggling to share the container-embedded code with the host (Vagrant -> OSX) for development purposes: editing a file on the OSX host should affect the code in the container.
It seems that there is no way to share a folder from the container to the host.
Sharing a folder from host to container (the -v option of docker run) overwrites the original container folder.
A symlink does not work either, since the hosts (Vagrant and OSX) cannot read the original location.
I'm sure that the solution involves Docker volumes (http://docs.docker.io/en/latest/use/working_with_volumes/), but we have not yet figured out how to make it work.
Do you have feedback or experience with this?
You can share files from OSX to the container along the following chain:
OSX dir (host) --shared folder--> /vagrant (Vagrant) --volume--> container dir (container)
but then the file is saved on your host, not in the container.
If you want to save files in the container and share them with your OSX host: all of a container's files live in an aufs directory under /var/lib/docker/aufs/mnt/{container id}, and you can share that folder with OSX through the shared-folder features supported by Vagrant or other tools:
container dir (container) --aufs--> /var/lib/docker/aufs/mnt/{id} (Vagrant) --shared folder--> OSX dir (host)
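As a rough sketch of the two directions (all names and paths here are hypothetical, and the aufs path applies to older Docker versions only):

# Host -> container: Vagrant already syncs the project folder to /vagrant,
# so inside the Vagrant VM you can mount it into the container as a volume:
docker run -v /vagrant/app:/var/www/app your-image

# Container -> host: find the container's filesystem root on the Vagrant VM,
# then expose that path to OSX with a Vagrant synced folder:
CONTAINER_ID=$(docker inspect --format '{{.Id}}' your-container)
ls /var/lib/docker/aufs/mnt/$CONTAINER_ID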
I have been working on an ASP.NET application using Docker, and when I launch it through Visual Studio it works great! However, if I try to run anything from the command line (or powershell, or VS's CLI/Powershell) it will run, but the container it generates refuses all connections.
I am on Windows 10 with Docker Desktop installed, trying to run an ubuntu:18.04 image (I've tried Alpine and ubuntu:16.04 as well).
Steps to reproduce:
-Create a default ASP.NET application in Visual Studio
-Add Docker Support
-Run with 'Docker' selected
-Open browser, navigate to localhost:[YourPort]
-Success! Works as intended.
Then, either using the same image or a downloaded one (I tried dockersamples/static-site to confirm it wasn't a problem with the specific project):
-Open CMD
-Run docker run -p [HostPort]:[ContainerPort] [SameImageVSUses:tag] on a different port
-See that docker ps shows both containers running next to each other
-Open browser (Firefox), get error
The connection was reset
Update
I changed the ASP.NET app's Program class to listen on 0.0.0.0 instead of localhost. I believe this was necessary, but now I see
Secure Connection Failed
PR_END_OF_FILE_ERROR
If I curl localhost:[MyPort], I get (52) empty reply from server
/Update
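For reference, binding to all interfaces can also be done without code changes, through the standard ASPNETCORE_URLS environment variable (the image name and ports here are illustrative):

docker run -p 8080:80 -e ASPNETCORE_URLS=http://0.0.0.0:80 yourimage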
Well, maybe Visual Studio does more that I'm not aware of.
A little bit of digging shows yes, it throws in a ton of extra arguments! But using the exact command Visual Studio runs, copy/pasted, gives me... the exact same error.
To clarify, the containers still run from the command line; I can ssh in or docker inspect them (in fact, the VS-started and CMD-started containers' docker inspect output is identical other than the network addresses they're bound to). I get no error messages at all from the process of building and starting the container, so if some part of it is failing, it is doing so silently.
I'm relatively new to Docker but I can't seem to find a fix for this, or even a reason behind it. What is Visual Studio doing that I'm not? I've tried everything I'm aware of, I even had to wipe my machine (unrelated) and the exact same thing happened when I got everything reinstalled. My gut tells me it's something on my machine, but then the VS-launched one should fail too, right?
I can't find anything that tells me to flip a magic switch if I'm running CLI stuff, and nothing I do to the Dockerfile or command arguments seems to work. I've never used VirtualBox or Docker Toolbox, so this shouldn't be a wonky configuration screwed up by an old program, because it works fine when launched from Visual Studio! Agh!
I hope that this is indeed a magic switch I haven't flipped, otherwise there is something very basic that I don't understand about what I'm working with.
If you are trying to run a recent VS template, you just need to follow these instructions:
Go to the Api project directory:
cd ./src/YourApiDirectory
Build Command:
docker build -f ./Dockerfile --force-rm -t yourapiimage:dev ..
Run Command:
docker run -it --rm -e "ASPNETCORE_ENVIRONMENT=Development" -p 58817:80 --name yourapiname yourapiimage:dev
Please note that the "-it" flag in the last command runs your image in "interactive" mode. Also note that I am using only an HTTP connection, via port 58817.
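As a quick sanity check from the host (assuming the default API template's /weatherforecast route; adjust it to your own controller):

curl -i http://localhost:58817/weatherforecast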
Thank you for the suggestions; it ended up being something rather frustrating. I think it was a combination of two problems:
This stuff could be causing problems for others, but I was mistaken; it did not work for me:
First and foremost, no amount of docker configuration tells your website to listen for anything inside the container. I believe the website wasn't listening for anything when I initially tried most fixes.
The real problem was that the launchSettings.json in the .csproj Properties folder apparently overrides arguments from the command line!
Remember how I said '...run it alongside the first...'? That means I was never running the website on the correct set of ports. Apparently, -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 is not enough to make the site listen on 443. You must also set the sslPort in the launchSettings.json. Such is life, I suppose.
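In other words, the sslPort in Properties/launchSettings.json has to agree with the ports you pass on the command line. A quick way to check (file location per the standard template; values illustrative):

grep -n sslPort Properties/launchSettings.json
# the value shown should match the container port you map, e.g.
# "sslPort": 443  to go with  docker run -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 ...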
This is what finally worked
I ran docker-compose up in the solution directory. That's it. I didn't see a docker-compose.yml when I was looking in VS so I didn't think about it, but that's only because VS doesn't show solution-level items. I guess the thing that VS was doing that I wasn't was running docker-compose instead of individual commands.
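For anyone else landing here, the whole fix then amounts to (solution path illustrative):

cd path/to/YourSolution   # the folder containing the .sln and docker-compose.yml
docker-compose up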
When you launch directly with the Docker profile (which is done via the docker-compose file in Visual Studio), Visual Studio behind the scenes merges the different override files and performs several tasks, one of which is attaching the remote debugger in the container.
To help you, I've created a sample ASP.NET Core API via Visual Studio 2019, selecting .NET Core 3.0.
The following is the docker-compose command that VS2019 generated on my machine when I launched my API via VS2019.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" -f "C:\Users\myuser\source\repos\testwebcore\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose14364360289538262671 --no-ansi up -d --build --force-recreate --remove-orphans
I can get it to work directly in PowerShell by running the following command; here I am using the same settings from the override file created by default by VS2019. You have to run this command from the parent folder, outside the project folder.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" up
If you build and run directly with the Dockerfile instead of docker-compose:
You can build with the following command (note the trailing ".", which is the build context) and, as before, you should run it from the folder outside the project folder.
docker build -f testwebcore/Dockerfile -t testcore .
After building the image, you can run it with the below command, but before that you need to create a certificate and pass a couple of environment variables to the run command. The details are mentioned in the following page, especially the section on Windows Subsystem for Linux. I am running Linux containers on my Windows 10 laptop.
So you have to run the following command to generate the certificate:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p testpassword
So the complete run command, with the environment variables and the certificate generated above, is as follows.
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="testpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v c:\users\myuser\.aspnet\https:/https/ testcore:latest
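If everything is wired up, something like this should get a response from the host (-k makes curl accept the self-signed dev certificate; the route is the default template's):

curl -k https://localhost:8001/weatherforecast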
I'm trying to run the xv6 operating system on VirtualBox or VMware on a Linux host. The official instructions only describe how to run the OS on QEMU. However, the official page (https://pdos.csail.mit.edu/6.828/2014/xv6.html) mentions that xv6 can also be booted directly on hardware, but it's not clear how.
I want to boot xv6 on VirtualBox or VMware first. I extracted the following command from the Makefile, which runs xv6 from the command line after it's compiled using the make command.
/usr/bin/qemu-system-i386 -serial mon:stdio -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp 2 -m 512
Please help me with how to proceed. If the procedure is already documented, a reference would be helpful.
The instructions are here, which is linked (via the 6.828 tools page) from your link, though they are a bit terse:
Using a Virtual Machine
Otherwise, the easiest way to get a compatible toolchain is to install
a modern Linux distribution on your computer. With platform
virtualization, Linux can cohabitate with your normal computing
environment. Installing a Linux virtual machine is a two step process.
First, you download the virtualization platform.
VirtualBox (free for Mac, Linux, Windows) — Download page
VMware Player (free for Linux and Windows, registration required)
VMware Fusion (Downloadable from IS&T for free).
VirtualBox is a little slower and less flexible, but free!
Once the virtualization platform is installed, download a boot disk
image for the Linux distribution of your choice.
Ubuntu Desktop is what we use.
This will download a file named something like
ubuntu-10.04.1-desktop-i386.iso. Start up your virtualization platform
and create a new (32-bit) virtual machine. Use the downloaded Ubuntu
image as a boot disk; the procedure differs among VMs but is pretty
simple. Type objdump -i, as above, to verify that your toolchain is
now set up. You will do your work inside the VM.
I can see how one could read that and not see the answer.
After the virtual machine is installed, download the Ubuntu Desktop .iso. Install that into the VM and fire it up. Presumably the Desktop will provide a clear mechanism for loading your OS. (Wait, I'm giving it a try. Will update with the result.)
Turns out that is simply an Ubuntu client desktop, and isn't anything special for running a sub-operating system.
Looking around some more, I found the commentary to be the best potential clue. It contains this (head-scratcher) phrase:
To run xv6, install the QEMU PC simulators. To run in QEMU, run "make qemu".
If only it specified the context to get to that point! (Sorry I am not more help.)
I see that you want to boot it on VirtualBox or VMware, but another option would be to use Docker to run xv6. A great guide for getting started with xv6 through Docker is here.
The full guide is elaborate and can help you with getting started.
It is an alternative option, but one that can hopefully get you going fast.
It will only take 4 steps to get going with xv6:
Step 1
- Download and set up Docker here
Step 2
- Run this command in PowerShell or bash to pull the Ubuntu image with xv6: docker pull grantbot/xv6
Step 3
- To run the Docker image and get going with xv6, run this command: docker run -it grantbot/xv6
Step 4
- Now, inside the shell in the Ubuntu image, run cd /home/a/xv6-public/ to enter the root folder of xv6.
Done
- Now you can compile and run xv6 with make qemu-nox (the full session is recapped below)
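Put together, the whole session looks like this (image and paths as given in the guide):

docker pull grantbot/xv6       # fetch the image with the xv6 environment
docker run -it grantbot/xv6    # start an interactive shell in the container
cd /home/a/xv6-public/         # inside the container: the xv6 sources
make qemu-nox                  # build and boot xv6 under QEMU (no X window)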
Step 1. Compile xv6
Download the code, unzip it, and enter the directory. Compile the operating system image and the root file system with the following command:
make xv6.img && make fs.img
Step 2. Write image to disk
Create two disks in an existing VMware virtual machine (my VMware version is 15.2.2, the Linux version is CentOS 7.8). The steps are: Virtual Machine Settings -> Add -> Disk -> SCSI -> Create a new virtual disk -> size 0.005 GB (allocate immediately, single file) -> name the disk "os", meaning this disk holds the operating system.
Create another disk named "fs" in the same way, to hold the root file system.
At this point, there should be "sdb" and "sdc" in the /dev/ directory (sda is the current operating system itself). If you do not see "sdb" and "sdc", restart the guest operating system.
Write the operating system and the root file system to the disks with the following commands:
dd if=./xv6.img of=/dev/sdb bs=4k count=1000
dd if=./fs.img of=/dev/sdc bs=4k count=1000
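Optionally, before shutting down, you can check that the images really landed on the disks (a sketch assuming GNU cmp and stat):

sync    # flush pending writes first
cmp -n $(stat -c%s xv6.img) xv6.img /dev/sdb && echo 'os disk OK'
cmp -n $(stat -c%s fs.img) fs.img /dev/sdc && echo 'fs disk OK'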
Shut down the current virtual machine to ensure that the files have been synced to the disks. At this point the two images have been written to the disks. VMware saves each disk as a file in the directory of the current virtual machine, named os.vmdk and fs.vmdk; the next step will load these two files into the new virtual machine.
Step 3. Create xv6 virtual machine
To create an empty virtual machine, the steps are: Custom (advanced) -> Next -> Install the operating system later -> choose "Other" as the operating system type (choose "Other" versions) -> name the virtual machine xv6 (the name is up to you) -> then use the default configuration, clicking "Next" all the way to completion.
Right-click the created virtual machine and delete the disk created by default. Add the disk files created in the previous step to the current virtual machine. The steps are: Add -> Disk -> IDE (note that this must be an IDE disk rather than SCSI, because xv6 reads an IDE-format disk) -> Use an existing virtual disk -> select the os.vmdk generated in step 2 -> Finish.
Add fs.vmdk in the same way. Note that you must add os.vmdk first: os.vmdk is the operating system, so it needs to be the first hard disk.
Now you have a virtual machine with two disks: one is the OS disk and the other is the root file system disk. Everything is ready.
Start the virtual machine, and the xv6 will start successfully.
I am running Shiny-Server (to run web applications built in R) in a Docker container. I have an application where the user can upload some files. It's working, but on the server OS I needed to give write and read permissions to the user "shiny". The problem is that every time I need to do something with the container (like restart, or simply stop and start), I lose the change made to the folder's permissions, which go back to the defaults.
I tried to use docker commit and docker run again on the container, using the new image, but it did not work. So now I am searching for whether I can use docker run and docker exec together, doing something like this: docker run <docker commands to run shiny-server> exec -it bash <bash commands to change folder permissions>.
Is it possible? Does anyone have a good solution for this case?
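To be concrete, this is the kind of combination I mean (image name and path are hypothetical):

docker run -d --name shiny -p 3838:3838 my-shiny-image     # start the container
docker exec shiny chown -R shiny:shiny /srv/shiny-server   # then fix permissions inside it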
Thanks.
Just wondering if there is any method to copy a file to a live VM created over KVM using libvirt tools. My objective is to assign a static IP address to the VM without modifying the img file and without using DHCP. What I understand is that we need to have a file in /etc/sysconfig/network-scripts/ corresponding to the interface in the VM where the IP address has to be assigned. I'm wondering if I can copy this file after the VM is created and booted up.
Update: I am using CentOS 7 for both guest and host.
Thanks
I'd suggest using a kickstart file for installing the machine. That way the installer automatically sets the IP address wherever it is needed (even though you know where it needs to be set in the current version). Copying the file onto the disk while the VM is running must be done in a way that the VM knows about, and that means you need access to the machine, which, I guess, you don't, since that's probably what you're trying to achieve in the first place.
If the machine is already installed and you want to configure it without access to it and without reinstalling, I'd suggest cleanly shutting down the VM and then using libguestfs (mainly the guestfish command), which lets you access the disk of the machine.
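For illustration, a minimal guestfish session under those assumptions (the domain name and the ifcfg file are hypothetical; guestfish -i inspects and mounts the guest's filesystems):

virsh shutdown centos7-guest     # cleanly stop the domain first
guestfish -d centos7-guest -i <<'EOF'
upload ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0
EOF
virsh start centos7-guest        # boot with the static IP config in place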
This works really well: http://www.linux-kvm.org/page/9p_virtio
Basically, in the guest: mkdir /tmp/share && echo '/hostshare /tmp/share 9p trans=virtio,version=9p2000.L 0 2' >> /etc/fstab. On the host: mkdir /tmp/share.
Then in virt-manager: Add Hardware > Filesystem, change Driver to Path, add Source /tmp/share and Target /hostshare. Then run mount -a in the guest.
Or mount it manually with: mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/share.
I am using http://blog.thoward37.me/articles/where-are-docker-images-stored/ to understand Docker's directory structure. That link suggests that every Docker image entry under /var/lib/docker/graph/ has three fields: json, layersize, and /layer. But on my local system I don't find the /layer directory for any image. Why is that so? The Docker version I am using is 1.3.2.
No issues, I found the answer. The link I specified uses Docker 0.8, and I am using Docker 1.3.2. In the newer Docker version, the rootfs layers are found in /var/lib/docker/aufs/mnt/.
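For anyone checking the same thing, with the aufs storage driver this is quick to see (root access assumed; the directory names are layer/container ids):

sudo ls /var/lib/docker/aufs/mnt/    # mounted rootfs of containers
sudo ls /var/lib/docker/aufs/diff/   # per-layer filesystem contents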