I wish to install a VM on my Xen Project machine that will run a Zentyal firewall. My machine has three network cards: one integrated, and two discrete, similar cards (they have the same Realtek chip, but are from different manufacturers). For the firewall to work optimally, I want to assign and dedicate the two discrete NICs to my firewall VM, and use the integrated card for dom0 and the other VMs. I have been able to do similar things with other virtualisation software in the past, but have not found a way to do it with Xen Project.
This page provides many useful configurations, but I don't think any of them match what I want to do. Is this at all possible, or must I give up hope of virtualising my firewall computer?
I think the best way to solve this would be PCI passthrough in Xen. This lets you leave one of your NICs attached to dom0 (which can then be bridged to let the other VMs connect through the same interface; see one of the Xen articles on network configuration for examples of how to set this up, as it is the same as if you only had a single NIC) and give the firewall VM full control over the other two NICs.
The process is somewhat involved and can vary by distribution, so I would advise checking the first article I linked, but I will describe the basic process.
Check the PCI addresses of the two network cards you want to pass through using lspci. The lines of output for your cards will look something like the following (the details will be very different, but the structure will be the same):
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
00:19.1 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
Make a note of the first column (00:19.0 and 00:19.1 in this example) and add the addresses to the config for your firewall VM in the following format:
pci=['00:19.0','00:19.1']
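For context, here is a minimal sketch of where that line would sit in a complete xl guest config (every other value here is illustrative, not taken from your setup):

name = "zentyal-fw"
memory = 2048
vcpus = 2
disk = ['phy:/dev/vg0/zentyal-disk,xvda,w']
# No vif entry needed if the firewall should only use the passed-through NICs
pci = ['00:19.0','00:19.1']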
On its own this will cause the VM to fail to boot, as it will be unable to pass through the devices. For the devices to be passed through, they need to be bound to the pciback driver in dom0 with commands like:
xl pci-assignable-add 00:19.0
xl pci-assignable-add 00:19.1
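You can verify that the devices are now available for passthrough with:

xl pci-assignable-list

If you want the binding to persist across reboots, one common approach (the exact mechanism varies by distro, so treat this as a sketch) is to hide the devices from dom0 at boot with the xen-pciback kernel parameter:

xen-pciback.hide=(00:19.0)(00:19.1)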
Binding the devices this way may not be possible in all situations, but there are other methods if it is not. I strongly advise reading the article I mentioned before, to fully understand the best way to do this in your case.
I'm a noob learning OpenStack, and the resources are all over the place, tbh. I came across this image and would like to know one thing.
Suppose I have 100 TB of storage, ten server-grade processors, and 1 TB of RAM. Do all these resources sit under only one base OS, Red Hat Enterprise Linux? In other words, do they connect all the equipment and install one single OS that can manage it all?
And on top of this, we deploy an OpenStack architecture so clients can use the resources as needed? Do we need as many NICs, or are the NICs virtual?
How to scale?
As you say, you just add a server. Install RHEL or another supported Linux distro (it's best to install the same distro and version on all servers), then OpenStack and configure it. The new server will register with the OpenStack controllers and can be used for launching virtual machines immediately.
The process is a bit more involved when you run a cloud with baremetal instances (i.e. you don't launch VMs but provision physical systems), but in principle it's the same.
by definition (at consumer scale, like one laptop) we need a network interface card for one IP
This is incorrect. You can configure multiple IP addresses on a single interface, even on your PC at home, even if that PC runs Windows.
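For example, on Linux with iproute2, and the rough Windows equivalent (addresses and interface names are placeholders):

ip addr add 192.168.1.10/24 dev eth0
ip addr add 192.168.1.11/24 dev eth0
netsh interface ip add address "Local Area Connection" 192.168.1.12 255.255.255.0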
An enterprise cloud requires connecting nodes to several networks. Usually, servers have several physical NICs, which are bonded together, and VLANs or other multiplexing technologies are used to implement the networks. See this blog (five years old, but the principles still apply today, and it's well written) for a good example of a real-world OpenStack network architecture.
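As a rough illustration of the bonding-plus-VLAN pattern with iproute2 (interface names and the VLAN ID are made up; real deployments configure this through the distro's networking stack):

ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link add link bond0 name bond0.100 type vlan id 100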
Openstack uses one big special NIC
OpenStack can be deployed in many ways. It is not a shrink-wrapped solution. It can be used on servers with single NICs, bonded NICs, VLANs, normal networks, etc. Your statement is almost correct if you think of a typical deployment and a bond interface as a "big special NIC".
If you are interested in trying this out at home, see the OpenStack installation tutorial. You will learn a lot.
How do I make two VMs communicate with each other? I have to split a task between two VMs, so I think MPI has to be used. If so, are there any useful resources I can use to get started? Any help would be appreciated.
P.S.: I have installed DevStack Juno.
Your question is not really clear.
OpenStack is just a virtualization technology. There's almost no difference between having two hardware servers and two VMs. For example, if two servers belong to the same network segment, they will normally have access to each other's open ports. OpenStack works in just the same way: if you assign the same network to the VMs, this will also work.
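For instance, with the OpenStack CLI the idea looks roughly like this (network, image, and flavor names are placeholders; a Juno-era DevStack would use the older nova/neutron clients instead):

openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 10.0.0.0/24 demo-subnet
openstack server create --image cirros --flavor m1.tiny --network demo-net vm1
openstack server create --image cirros --flavor m1.tiny --network demo-net vm2

Both VMs then get ports on demo-net and can reach each other directly.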
However, if you wish to have two VMs that consume from a list of tasks and process them in parallel, I would recommend reading about Enterprise Integration Patterns (e.g. here). Technically this is implemented using one or more messaging middleware systems, such as ActiveMQ or ZeroMQ.
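As a minimal sketch of that pattern using ZeroMQ from Python (assuming pyzmq is installed on both VMs; the address, port, and task contents are made up):

# producer.py, runs on one VM and hands out tasks
import zmq

ctx = zmq.Context()
sender = ctx.socket(zmq.PUSH)
sender.bind("tcp://*:5557")  # workers connect to this VM

for i in range(100):
    sender.send_string(f"task {i}")  # tasks are distributed round-robin across workers

# worker.py, runs on each worker VM
import zmq

ctx = zmq.Context()
receiver = ctx.socket(zmq.PULL)
receiver.connect("tcp://10.0.0.5:5557")  # placeholder IP of the producer VM

while True:
    task = receiver.recv_string()
    print("processing", task)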
We are considering Riak for use in an embedded device context (embedded Linux) where devices are dynamically addressed (DHCP).
Is this a viable choice?
We can assume that appropriate auto-discovery protocols are in place to enable devices to discover each other. Upon joining the network, a device would obviously need to do a riak-admin cluster join <other device>. Other than this, would Riak be capable of handling devices leaving and re-joining the network fairly infrequently? Or does it play much more nicely in a statically-addressed environment?
DHCP doesn't necessarily mean the device has to rejoin when it boots. If the node names are resolvable via DNS or a hosts file, and the listeners are configured to 0.0.0.0, the Riak nodes should communicate quite happily even if their IPs change on reboot.
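In riak.conf terms (Riak 2.x syntax; the node name is a placeholder and assumes the DNS or hosts-file resolution described above), that would look something like:

nodename = riak@device1.local
listener.http.internal = 0.0.0.0:8098
listener.protobuf.internal = 0.0.0.0:8087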
I was just curious whether I can connect any two interfaces of any two VMs running on the same box using VMware Player, as if the two interfaces were connected by a network cable. I need this setup to simulate some networking tests.
Under Workstation you can very easily create what you're after; however, in Player, if you select host-only, there would be three things on the network: the host and the two guests.
I don't know the reason for the downvote.
Anyway, I think that creating one more virtual network, like vmnetX, and then connecting the two interfaces to it will solve the problem. Please correct me if I am wrong.
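If you go that route, the relevant lines in each VM's .vmx file would look roughly like this (the adapter number and VMnet name are examples; Player may require editing the file by hand, since it lacks the Virtual Network Editor):

ethernet1.present = "TRUE"
ethernet1.connectionType = "custom"
ethernet1.vnet = "VMnet2"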
P.S.: It is obvious you cannot connect the two VMs using an actual wire :)
Has anyone had any luck getting MSMQ multicast (PGM) to bind to a specific network interface using the MulticastBindIP registry setting?
MSMQ Multicast (PGM) always seems to bind to the first interface listed by ipconfig. In my case, I have VMware installed, so I have two virtual network interfaces (VMnet8 and VMnet1) as well as my network card. It isn't useful to have MSMQ send PGM packets via the VMware virtual interfaces.
I have attempted to use the MulticastBindIP registry setting (of course, restarting MSMQ after the change), but this does not seem to make any difference. For example, the IP address of my "Local Area Connection" is 172.18.224.245, so I set the following registry key value:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters]
"MulticastBindIP"=dword:ac12e0f5
The DWORD is stored with the most significant byte first: 172 = 0xAC, 18 = 0x12, 224 = 0xE0, 245 = 0xF5, giving ac12e0f5. However, using Wireshark, I can see that the PGM packets are not going out via this interface (they are still sent via the first interface listed by ipconfig).
The documentation could be wrong, so I also tried variations such as least significant byte first, and even a (dot-delimited, IPv4-style) string value. Nothing made any difference. The only way I can get MSMQ multicast to bind to the correct interface is to disable all the virtual interfaces, which isn't a workable solution.
If anyone is interested, "MulticastBindIP was introduced for Windows 2003 Server and not back-ported to Windows XP". Thanks to John Breakwell for the help. See this Microsoft newsgroup discussion for more details.
The only solution I've found on Windows XP is to disable all interfaces except the "Local Area Connection". When you restart the MSMQ Windows service, it will bind to the correct network interface (because it's the only one available). I suspect it's not that common to have multiple network cards in a machine running Windows XP, but it is common to have VMware or VirtualBox virtual interfaces that expose this MSMQ binding issue.
FYI: for more recent operating systems, where the MulticastBindIP registry setting is supported, there is some debate over whether the value should be a DWORD or a REG_SZ.
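If you run into this, it may be worth trying both forms; the REG_SZ variant would look like this (reusing the example address from above):

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters]
"MulticastBindIP"="172.18.224.245"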