Just wondering if there is any method to copy a file to a live VM created over KVM using the libvirt tools. My objective is to assign a static IP address to the VM without modifying the img file and without using DHCP. As I understand it, we need a file in /etc/sysconfig/network-scripts/ corresponding to the interface in the VM where the IP address has to be assigned. Wondering if I can copy this file after the VM is created and booted up.
Update: I am using CentOS 7 for both guest and host.
Thanks
I'd suggest using a kickstart file for installing the machine. That way the installer sets the IP address automatically wherever it is needed (even though you know where it needs to be set in the current version). Copying the file onto the disk while the VM is running has to be done in a way the VM knows about, and that means you need access to the machine, which, I guess, you don't; after all, getting that access is probably what you're trying to do.
If the machine is already installed and you want to configure it without access to it and without reinstalling, I'd suggest cleanly shutting down the VM and then using libguestfs (mainly the guestfish command), which lets you access the disk of the machine.
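For example, with the guest shut down, something along these lines should work using virt-copy-in from the same libguestfs tool set (the interface name, the addresses and the domain name "myguest" are placeholders):
# on the host: write the static-IP config locally (values are examples only)
cat > ifcfg-eth0 <<'EOF'
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.122.50
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
EOF

# copy it into the shut-down guest's filesystem
virt-copy-in -d myguest ifcfg-eth0 /etc/sysconfig/network-scripts/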
This works really well: http://www.linux-kvm.org/page/9p_virtio
Basically, in the guest: mkdir /tmp/share && echo '/hostshare /tmp/share 9p trans=virtio,version=9p2000.L 0 2' >> /etc/fstab. On the host: mkdir /tmp/share.
Then in virt-manager, go to Add Hardware > Filesystem, change the Driver to Path, set Source to /tmp/share and Target to /hostshare. Then run mount -a in the guest.
Or mount it manually: mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/share.
I'm trying to run the xv6 operating system on VirtualBox or VMware on a Linux host. The official instructions only cover running the OS on QEMU. However, the official page (https://pdos.csail.mit.edu/6.828/2014/xv6.html) mentions that xv6 can also be booted directly on hardware, but it's not clear how.
I want to boot xv6 on VirtualBox or VMware first. I extracted the following command from the Makefile; it runs xv6 from the command line after it's compiled with the make command.
/usr/bin/qemu-system-i386 -serial mon:stdio -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp 2 -m 512
Please help me figure out how to proceed. If the procedure is already documented somewhere, a reference would be helpful.
The instructions are here, linked (via the 6.828 tools page) from your link, though they are a bit terse:
Using a Virtual Machine
Otherwise, the easiest way to get a compatible toolchain is to install
a modern Linux distribution on your computer. With platform
virtualization, Linux can cohabitate with your normal computing
environment. Installing a Linux virtual machine is a two step process.
First, you download the virtualization platform.
VirtualBox (free for Mac, Linux, Windows) — Download page
VMware Player (free for Linux and Windows, registration required)
VMware Fusion (Downloadable from IS&T for free).
VirtualBox is a little slower and less flexible, but free!
Once the virtualization platform is installed, download a boot disk
image for the Linux distribution of your choice.
Ubuntu Desktop is what we use.
This will download a file named something like
ubuntu-10.04.1-desktop-i386.iso. Start up your virtualization platform
and create a new (32-bit) virtual machine. Use the downloaded Ubuntu
image as a boot disk; the procedure differs among VMs but is pretty
simple. Type objdump -i, as above, to verify that your toolchain is
now set up. You will do your work inside the VM.
I can see how one could read that and not see the answer.
After the virtual machine is installed, download the Ubuntu Desktop .iso. Install that into the VM and fire it up. Presumably the Desktop will provide a clear mechanism for loading your OS. (Wait, I'm giving it a try. Will update with the result.)
Turns out that is simply an Ubuntu client desktop, and isn't anything special for running a sub-operating system.
Looking around some more, I found the commentary to be the best potential clue. It contains this (head scratcher) phrase:
To run xv6, install the QEMU PC simulators. To run in QEMU, run "make qemu".
If only it specified the context to get to that point! (Sorry I am not more help.)
I see that you want to boot it on VirtualBox or VMware, but another option would be to use Docker to run xv6. A great guide for getting started with xv6 through Docker is here.
The full guide is detailed and can help you get started.
It is an alternative option, but one that can hopefully get you going fast.
It only takes 4 steps to get going with xv6:
Step 1
- Download and set up Docker here.
Step 2
- Run this command in PowerShell or bash to pull the Ubuntu image with xv6: docker pull grantbot/xv6
Step 3
- To run the Docker image and get going with xv6, run: docker run -it grantbot/xv6
Step 4
- Inside the shell in the Ubuntu image, run cd /home/a/xv6-public/ to enter the root folder of xv6.
Done
- Now you can compile and run xv6 with make qemu-nox.
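If you want to script those steps, they collapse into a couple of commands; this assumes the image's default shell is bash and that the source lives at /home/a/xv6-public as in step 4:
docker pull grantbot/xv6
docker run -it grantbot/xv6 bash -c "cd /home/a/xv6-public && make qemu-nox"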
Step 1. Compile xv6
Download the code, unzip it, and enter the directory. Compile the operating system image and the root file system with the following command:
make xv6.img && make fs.img
Step 2. Write the images to disk
Create two disks in an existing VMware virtual machine (my VMware version is 15.2.2, the Linux version is CentOS 7.8). The steps are: virtual machine settings -> add -> disk -> SCSI -> create a new virtual disk -> size 0.005 (allocate immediately, single file) -> name the disk "os", meaning this disk holds the operating system.
Create another disk named "fs" in the same way to hold the root file system.
At this point there should be "sdb" and "sdc" entries in the /dev/ directory (sda is the current operating system itself). If you do not see sdb and sdc, restart the guest operating system.
Write the operating system and the root file system to the disks with the following commands:
dd if=./xv6.img of=/dev/sdb bs=4k count=1000
dd if=./fs.img of=/dev/sdc bs=4k count=1000
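As an optional sanity check before shutting down, you can flush the writes and compare the start of each disk against the image files (sizes are taken from the local images):
sync
cmp -n "$(stat -c %s xv6.img)" xv6.img /dev/sdb
cmp -n "$(stat -c %s fs.img)" fs.img /dev/sdc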
Shut down the current virtual machine to ensure the data has been synced to disk. At this point the two images have been written to the disks. VMware saves each disk as a file in the directory of the current virtual machine, named os.vmdk and fs.vmdk; the next step loads these two files into the new virtual machine.
Step 3. Create xv6 virtual machine
Create an empty virtual machine. The steps are: customize (advanced) -> next -> install the operating system later -> choose other operating system type (other versions) -> name the virtual machine xv6 (the name is up to you) -> then accept the default configuration all the way through "Next" to completion.
Right-click the created virtual machine and delete the disk created by default. Add the disk file created in the previous step to the current virtual machine. The steps are: add -> disk -> IDE (note that this must be an IDE disk instead of a SCSI disk, because xv6 reads an IDE-format disk) -> use an existing virtual disk -> select the os.vmdk generated in step 2 -> complete.
Add fs.vmdk in the same way. Note that you must add os.vmdk first: because os.vmdk holds the operating system, it needs to be the first hard disk.
Now you have a virtual machine with two disks: one is the OS disk, the other is the root file system disk. Everything is ready.
Start the virtual machine, and xv6 will boot.
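As an alternative to writing the raw images with dd inside a helper VM, you can convert them to VMDK directly on the build machine with qemu-img (if you have it installed) and attach the resulting files as IDE disks in the same way; I haven't verified this boots on every VMware version, so treat it as a shortcut to try rather than the verified route above:
qemu-img convert -f raw -O vmdk xv6.img os.vmdk
qemu-img convert -f raw -O vmdk fs.img fs.vmdk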
I am writing a very basic RPM that does nothing more than drop a simple GUI onto a system. It requires nginx, drops some code into its html directory, and drops a conf file into its conf.d directory. Most of the time this will likely be run on a VM or a fresh box with little else installed.
While testing my RPM I noticed that the nginx it installs fails out of the box. The problem is that its default.conf uses an IPv6 address instead of IPv4, and the machine does not have an IPv6 address set; I guarantee none of the machines this code is installed on will ever have IPv6 set.
The fix is very simple, but my question is about good practice. I'm guessing it would usually be considered wrong to have my RPM modify nginx's default.conf to fix the line causing the failure, but at the same time, if I don't, my RPM will not function out of the box without someone manually tweaking the configuration files. How 'wrong' is it to overwrite the default files if I'm mostly confident I'll be installed on machines that don't have IPv6 addresses?
I'd check if you can drop something in conf.d to override the bad settings.
Otherwise...
Your %post can modify it with something like sed. Then drop a flag file indicating you did, so your %postun can try to clean up afterwards.
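A minimal sketch of that approach, assuming the stock config lives at /etc/nginx/conf.d/default.conf, the offending line is the IPv6 listen directive, and your package is called "mygui" (all three are assumptions):
%post
# comment out the IPv6 listen line if present (path and pattern are guesses -- check your nginx package)
if grep -q '^[[:space:]]*listen.*\[::\]' /etc/nginx/conf.d/default.conf 2>/dev/null; then
    sed -i.bak-mygui '/^[[:space:]]*listen.*\[::\]/s/^/#/' /etc/nginx/conf.d/default.conf
    touch /etc/nginx/conf.d/.mygui-patched   # flag so %postun knows we made the change
fi

%postun
# on full uninstall ($1 == 0), put the original file back if we were the ones who touched it
if [ "$1" -eq 0 ] && [ -f /etc/nginx/conf.d/.mygui-patched ]; then
    [ -f /etc/nginx/conf.d/default.conf.bak-mygui ] && \
        mv /etc/nginx/conf.d/default.conf.bak-mygui /etc/nginx/conf.d/default.conf
    rm -f /etc/nginx/conf.d/.mygui-patched
fi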
I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error means you won't be able to use it directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
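If you prefer doing this with the gcloud CLI instead of clicking through the console, the same flow looks roughly like this (instance, disk and zone names are placeholders; the device name inside the debugger VM may differ, and the --boot flag on attach-disk is fairly recent, so re-attach from the console if your gcloud lacks it):
# 1. stop the broken VM and detach its boot disk
gcloud compute instances stop broken-vm --zone us-central1-a
gcloud compute instances detach-disk broken-vm --disk broken-vm-disk --zone us-central1-a

# 2. attach the disk to a helper VM and fix the file from there
gcloud compute instances attach-disk debugger-vm --disk broken-vm-disk --zone us-central1-a
gcloud compute ssh debugger-vm --zone us-central1-a
#   inside the debugger VM:
#   sudo mkdir -p /mnt/broken && sudo mount /dev/sdb1 /mnt/broken
#   sudo visudo -c -f /mnt/broken/etc/sudoers   # shows the offending line; edit the file and re-check
#   sudo umount /mnt/broken
gcloud compute instances detach-disk debugger-vm --disk broken-vm-disk --zone us-central1-a

# 3. re-attach the disk as the boot disk and start the original VM again
gcloud compute instances attach-disk broken-vm --disk broken-vm-disk --boot --zone us-central1-a
gcloud compute instances start broken-vm --zone us-central1-a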
How to avoid this in the future
Always use the visudo command rather than just any text editor to edit the /etc/sudoers file; it validates the contents of the file before saving it.
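visudo can also be pointed at an arbitrary file for a pure syntax check, which is handy when the broken file lives on a disk you've mounted from another instance (the mount point below is just an example):
visudo -c -f /mnt/broken-disk/etc/sudoers   # prints the offending line and exits non-zero if the file is invalid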
I ran into this issue as well, and hit the same problem Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removed the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
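If you'd rather push that startup script from the command line than through the console, something like this should work (the local filename is just an example); remember the script only runs on the next boot:
gcloud compute instances add-metadata <instance-name> --zone <zone> \
    --metadata-from-file startup-script=remove-broken-sudoers.sh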
As you probably figured out, this requires the /etc/sudoers file to be fixed. Since nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shutdown your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk
Shutdown the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
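For the backup snapshot in the first step, the gcloud equivalent is a one-liner (disk, zone and snapshot names are placeholders):
gcloud compute disks snapshot broken-vm-disk --zone us-central1-a --snapshot-names before-sudoers-fix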
Since I bumped into this issue too: if you have another instance or any place where you can run with gcloud privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server which had gcloud set up as root, so you log in to the other box as root too! Then fix your issue. (If you don't have a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of the disk surgery, etc.
As mentioned in the comments above, I am getting the same error in my GCP VM:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this
I SSHed to another VM and became root, then ran the gcloud ssh command to our main VM (where the sudo error occurs):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And BOOM! Now you are logged in as root on the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack better than recreating VMs/disks.
Hope this helps someone!
It is possible to connect to a VM as root from your developers console Google Cloud Shell. Make sure the VM is running, start the shell and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found in the Compute Engine VM Instances screen. project-id is optional but required if you are connecting to an instance in a different project from the project where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script as mentioned above by @Jorick works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to get executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the cloud console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers, for example). If that works, your problem has been solved.
I'm in the process of ditching MAMP in favor of a Vagrant Ubuntu VM + Puppet for my WordPress development workflow. Ultimately, I would like to reuse the Puppet provisioning on the staging and production machines.
Vagrant defaults to running the Puppet files (modules, manifests, hiera.yaml) under /tmp/vagrant-puppet-3. However, my understanding is that on a real server one would put these files under the default Puppet directory, which is /etc/puppet/.
I understand that Puppet will work fine under /tmp/vagrant-puppet-3; however, I would like my development machine to be as close as possible to the future staging/production machines.
So ... my question is: how can you get Vagrant to create and run the Puppet files from /etc/puppet/?
The Vagrant Puppet provisioner has three variables of interest. You can specify manifest_file to identify the starting manifest, manifests_path to identify the manifests directory, and module_path to point at your module directories. For your example, the relevant portion of the Vagrantfile might look like this:
local.vm.provision :puppet do |puppet|   # "local" is the machine name here; use "config" in a single-VM Vagrantfile
  # by default these are paths on the host; use e.g. ["vm", "/etc/puppet/manifests"] to point at a guest path
  puppet.manifests_path = "/etc/puppet/manifests"
  puppet.module_path    = "/etc/puppet/modules"
  puppet.manifest_file  = "site.pp"      # relative to manifests_path, i.e. /etc/puppet/manifests/site.pp
  puppet.options = [
    '--verbose',
    '--debug',
  ]
end
More details here: https://docs.vagrantup.com/v2/provisioning/puppet_apply.html.
One option here is to use a tool like librarian-puppet to auto-install your modules into /etc/puppet/modules/ on the Vagrant VM. librarian-puppet can download specific versions of any Puppet module from GitHub or the Puppet Forge.
Here's a wonderful example of how you could implement this in practice (including a sample Vagrantfile, etc.): https://github.com/purple52/librarian-puppet-vagrant
In the above example, pay close attention to the main.sh shell provisioning script for Vagrant. It is what actually installs librarian-puppet on the VM and then calls librarian-puppet install (to install the configured modules).
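The actual main.sh lives in that repository; a rough sketch of what such a bootstrap script does might look like this (package names and the /etc/puppet paths are assumptions for an older Ubuntu guest, not the repo's exact script):
#!/bin/bash
# install librarian-puppet, then resolve the modules declared in /etc/puppet/Puppetfile
apt-get update
apt-get install -y ruby rubygems puppet
gem install librarian-puppet
cd /etc/puppet && librarian-puppet install   # modules land in /etc/puppet/modules by default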
After librarian-puppet has initialized the VM, you can simply use the normal Puppet provisioning in Vagrant to actually load/run the modules, e.g.
# This assumes you have a "puppet/manifests/main.pp" script
# under your vagrant folder
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "puppet/manifests"
  puppet.manifest_file  = "main.pp"
end
I am trying to convert Varying Vagrant Vagrants' wordpress-trunk (or development) site to be provisioned via git instead of svn.
There seems to be a script (I presume it is a script, even though it has no file extension) as part of the VVV project that will do the switch after the machine has been provisioned:
https://github.com/Varying-Vagrant-Vagrants/VVV/blob/master/config/homebin/develop_git
And the author told me that running the following from command line should do it:
vagrant ssh -c "develop_git"
but when I run that I get the following error:
Unknown cipher type 'develop_git'
There appears to be some code in the provision script that mentions git, but I have no idea what I am looking at.
So, does anyone know how to run/implement that script? Or otherwise convert the www/wordpress-trunk folder to git? Are there options somewhere to direct VVV to provision the trunk folder from git in the first place?
Contrary to the Vagrant documentation for vagrant ssh, the -c option is passed on to the ssh command and is therefore interpreted as the cipher specification.
I would suggest trying vagrant ssh -- "develop_git", since everything after "-- (two hyphens) [is] passed directly into the ssh executable".