I am developing a distributed file system using Java; I cannot give many details at this moment. I need to test some things on Linux, so I will use VMware Server and install Linux inside a virtual machine. Is there any difference between the simulated network card and a real Ethernet interface?
VMware is great for this sort of thing. There should be no difference except, as RichieHindle said, in performance, especially if you're planning to run multiple VMs on the same server.
Use real hardware if you want usable performance benchmark results.
Java is its own 'VM'... on top of a layer of virtualization in the guest OS... on top of VMware... on a CPU that is itself a virtual execution model. Take a little virtualization here, add a little virtualization there, and pretty soon we're talking about some real abstraction!
From the point of view of application code, no, there's no difference.
The only visible difference might be in performance - the speed of response and exact timings of things might be different, but you're talking microseconds.
There's so much general-purpose software that works flawlessly under VMs that the answer to almost every question of the form "Are VMs different from real machines?" at the application level is "No".
(Things might be different if you were talking about kernel-level driver software.)
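To make the application-level point concrete, here is a minimal sketch of ordinary socket code (plain C with BSD sockets; the address 192.168.1.10 and port 9000 are made up for illustration). Nothing at this level can tell whether the packets leave through real silicon or through VMware's emulated NIC:

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Ordinary TCP client code: the kernel's NIC driver hides
           whether the underlying interface is physical or emulated. */
        struct sockaddr_in peer = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        peer.sin_family = AF_INET;
        peer.sin_port = htons(9000);                        /* made-up port */
        inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr); /* made-up host */

        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
            static const char msg[] = "hello over a (possibly virtual) NIC\n";
            write(fd, msg, sizeof(msg) - 1);
        } else {
            perror("connect");
        }

        close(fd);
        return 0;
    }

The same holds for Java: java.net.Socket sits even further from the hardware, so your distributed file system code should behave identically either way.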
I am beginning to study the use of virtual machines with realtime applications, specifically network applications.
While I do understand the limitations and concerns, I'd like to get ideas as to how to get started on this task.
I am going to use a DPDK sample application on Linux, and will probably use VMware for starters. However, I do not know what my first steps with respect to setting up VMware should be.
First, I think it is better to use an open-source solution like QEMU/KVM as your virtualization platform. Many platforms exist for running high-performance network functions on virtualized hosts; see OpenNetVM, for example, to get the basic ideas.
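Before diving into the full sample applications, it can help to smoke-test the toolchain with a bare-bones DPDK program. Here is a minimal sketch (assuming a standard DPDK install; inside a VM, the ports it reports are whatever emulated or paravirtual NICs, e.g. e1000 or virtio, you have bound to a DPDK-compatible driver such as vfio-pci):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        /* Initialize DPDK's Environment Abstraction Layer (hugepages,
           cores, PCI scan); it consumes EAL flags such as -l and --vdev. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }

        /* Count the ports DPDK can drive; in a VM this reflects the
           virtual NICs exposed by the hypervisor, not real hardware. */
        printf("%u DPDK-capable port(s) available\n",
               rte_eth_dev_count_avail());

        rte_eal_cleanup();
        return 0;
    }

If this reports zero ports inside your VM, the NIC binding (not your application) is usually the problem.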
With OpenStack's architecture, is it possible to, for instance, have a PowerPC64 (AltiVec) machine, an Intel Core Duo machine, and an ARMv6 machine all in the same cluster?
Or is this impossible because of the restrictions in building buildpacks when deploying to multiple architectures?
EDIT: Whoops, I meant OpenStack, not OpenShift ;)
The answer from developercorey is correct.
Although whether this suits you depends on how it's managed and what you're trying to achieve. Typically, when you add servers with different physical attributes (CPU, disk, network cards, etc.), you group them into different host aggregates.
By default, when you launch a VM, the scheduler will try to find a suitable host, but you can also tag it. For example, if your VM requires a lot of disk I/O, you might want to place it on a host that has SSD drives; you can put those hosts into an 'SSD' aggregate, and then when launching your VM you can make sure it goes to a host in that aggregate.
If you're just trying to make the most of the hardware you have, then I don't see any issue with mixing them.
I don't think they have to be, but I believe they only build packages for one or two architectures, so I'm not sure how many options you really have there.
I am a newbie with Xen. I want to know how Xen works.
It's a real puzzle when facing the code, and I don't know where to start.
Are there some easy articles for me?
Since you mention looking at the code, I assume you want to understand the technical details of Xen and not merely how to start a VM.
As with all problems, start with something simple and then work your way up. Some pointers:
Be sure to have the prerequisite experience under your belt. In particular, strong C and Linux skills, but also a working knowledge of x86 paging and virtualized memory.
Make sure you have a sound grasp of the general Xen architecture. For instance, paravirtualized versus hardware-supported virtualization, the special role of the management domain (Dom0) compared to unprivileged domains (DomU), etc.
Investigate the Xen components running in Dom0:
The Xen control library (libxc) which implements much of the logic relating to hypercalls and adds sugar around these (look in tools/libxc).
The Swiss army knife for administering Xen, namely the Xen light library (libxl). This library replaces the deprecated xm tool with the xl tool and takes care of all your maintenance tasks, such as starting/stopping a VM, listing all running VMs, etc. For all these operations, it works in tandem with the aforementioned libxc. (Libxl lives in tools/libxl.)
The Xenstore is a tree-like data structure from which all running domains can retrieve and store data. This is necessary since all I/O goes through Dom0 (not the hypervisor!), and domains need to negotiate with Dom0 how they are going to pass I/O along. (Look in tools/xenstore.) You can inspect the Xenstore with a tool such as xenstore-ls; a short C sketch for reading it programmatically follows these pointers.
The blkback/netback kernel drivers, which pass data over shared channels to the VMs. (You will find these drivers in any recent Linux kernel (e.g., >= v3.0) with so-called PVOPS support.)
Take a look at the console daemon (tools/console). Note that sometimes the QEMU console is actually used. QEMU also comes into the picture as the default backend if you choose file-backed virtual storage for a VM.
Experiment with the 'Xen way' of inter-VM communication: grant tables, event channels, and the Xenstore. With these fundamentals you can create your own shared channel between VMs. You can do this, for example, by writing a kernel module that you load in two domains to let them talk to each other.
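To get a first feel for the Xenstore from user space (complementing the xenstore-ls tip above), here is a minimal sketch using libxenstore; it assumes you run it in Dom0 and link with -lxenstore, and /local/domain/0/name is just one well-known key:

    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    int main(void)
    {
        unsigned int len;
        char *value;

        /* Connect to the Xenstore daemon (or kernel backend). */
        struct xs_handle *xs = xs_open(0);
        if (!xs) {
            perror("xs_open");
            return 1;
        }

        /* Read a well-known key: the name of domain 0. */
        value = xs_read(xs, XBT_NULL, "/local/domain/0/name", &len);
        if (value) {
            printf("domain 0 is called: %s\n", value);
            free(value);
        }

        xs_close(xs);
        return 0;
    }

The same handle also supports writes and watches (xs_write, xs_watch), which is how split front-end/back-end drivers rendezvous before setting up their shared rings.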
I can also give some pointers in the source that you can check out:
xen/xen/include/public/xen.h will give you a list of all the hypercalls, with comments explaining what they do.
xen/xen/include/xen/mm.h gives you an introduction to the different memory terminology used by Xen (i.e., real versus virtualized addresses and page numbers). If you don't grasp these differences, then reading the hypervisor code will surely be frustrating.
xen/xen/include/asm-x86/config.h gives an overview of the memory layout of Xen.
xen/tools/libxc/xenctrl.h exports a large list of interesting domain control operations, which gives an abstract view of task division between Dom0 and the hypervisor.
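To see what these domain control operations look like in practice, here is a minimal sketch against xenctrl.h (run as root in Dom0 and link with -lxenctrl; the domain ID 1 is a made-up example, as real tools resolve IDs via the Xenstore) that pauses and resumes a guest. Each call is a thin wrapper over a domctl hypercall:

    #include <stdio.h>
    #include <stdint.h>
    #include <xenctrl.h>

    int main(void)
    {
        uint32_t domid = 1;  /* hypothetical guest domain ID */

        /* Open a handle to the Xen control interface (needs Dom0 privileges). */
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        if (!xch) {
            fprintf(stderr, "failed to open the xenctrl interface\n");
            return 1;
        }

        /* Each call below wraps a domctl hypercall into the hypervisor. */
        if (xc_domain_pause(xch, domid) == 0) {
            printf("domain %u paused\n", domid);
            xc_domain_unpause(xch, domid);
            printf("domain %u resumed\n", domid);
        }

        xc_interface_close(xch);
        return 0;
    }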
Last but not least, the book 'The Definitive Guide to the Xen Hypervisor' by David Chisnall comes highly recommended. It covers all these topics and more in a thorough, technical fashion with plenty of code examples.
The Xen wiki and developer mailing lists are also a great resource for understanding Xen.
If you have a more specific question, then I can give you a more specific answer.
Here are a few links that will guide you with getting started on Xen. I hope they are useful.
http://www.howtoforge.com/howtos/virtualization/xen
http://wiki.xen.org/wiki/Category:HowTo
http://wiki.debian.org/Xen
For me, the last one (the Debian wiki) is the best and most concrete tutorial, with examples and step-by-step instructions to get started; I used it when I started.
You can then read a lot more in the Xen documentation itself, or in books, but as a starting point that allows you to easily install and test Xen, I'd choose that tutorial from the Debian wiki.
If you just want an overview, you may read this: http://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide.
This will introduce you to the Xen hypervisor, suggest configurations for setting up virtual machines, provide information about networking, and finally give details about tools for managing virtual machines.
This documentation covers getting Xen running specifically on Ubuntu (most importantly, it works!):
https://help.ubuntu.com/community/Xen
===
However, if you want to go to the next level and understand the workings of Xen (its architecture, memory management, device management, CPU scheduling, etc.), I would recommend reading the book "The Definitive Guide to the Xen Hypervisor".
I want to ship a piece of hardware to clients that they plug in to their network via Ethernet or USB. This device contains an ASP.NET web application that they access via a web browser on any PC in their network.
This needs to be a small device that costs less than $500, meaning it can't be a full server with a Win2008 server license. This would be repeated hundreds or thousands of times - once for each new customer.
Are there external hard drives or NAS devices that can run as an IIS/ASP.NET web server?
Thanks,
Roger
If you stick with a PC setup, you might be able to use a desktop OS and IIS Express. It should support everything you want, and you could even run it on a cheap netbook.
I'm sure you could build a small PC based on an embedded motherboard, or even a Mini-ITX board. But this is a programming Q&A site and not really the place to ask about building servers.
If you're looking to keep it cheap, I would highly recommend looking into Mono, which is free and runs ASP.NET very well. If you have any Windows-specific dependencies you might need to change them, but hopefully you wouldn't have those in a website.
You should look into converting your app to run on Mono in a virtualized environment. The OS and the runtime environment would be open source and would allow you to distribute freely.
Mono (.NET runtime)
VirtualBox (VM environment)
Ubuntu Linux (OS)
Buy a netbook with Windows 7 Home Premium on it, as that bundles IIS 7. If you need any more "capacity", then you should look at bigger hardware anyway.
It's one of those things I see a lot but never really think about. For the purpose of web application development (specifically ASP.NET WebForms/MVC), do you think it's advantageous to virtualize, and if so, what kind of advantages come out of it?
By virtualization I mean using products like Hyper-V to separate server roles, such as your SQL Server and your web server.
The first question is: virtualization of what? Do you mean server virtualization? Do you mean running VMware on each dev's laptop with multiple OSes? Do you mean moving everything to the cloud?
Virtualization of servers, in a web app context, is not really different from virtualization in general IT: most of the servers on the Internet, including Stack Overflow's, are bought to handle peak loads and spend most of their time idling away the cycles, so virtualizing them makes sense once you have more than a certain number.
VMware on the desktop (or its parallels on other operating systems) is superb because your devs can run a full instance of your server environment, including multiple virtual servers connected in a virtual network; this is about as close to the real thing as you can get, minus hardware costs and minus devs messing with each other's servers. For clients, you can use Linux and multiple Windows installs to test various browsers, font sizes, etc. quickly, which is also a big win.
Moving everything to the cloud makes sense in many cases, but is probably a topic for a separate full-sized question :)
One big advantage I see is that every developer can have their own sandbox to work in. If someone messes up their sandbox, they can take a clean image and all is OK again. So there is room to experiment without losing valuable time getting back to the normal setup: you can simply do a rollback.
I have some doubts about whether you should use virtualization for production environments, though. It depends on the application, of course.
The only time I would use a VM for ASP.NET development is if the app required a specific setup, such as relying on installed software, weird settings, or particular shares. Every developer has their own web server and can run their own database, so if it's a "basic" web app I don't see much value in VMs; it's pretty hard to break anything with a basic web app deployment :)
With a virtual server, you can test your code in a production-like environment. It is also possible to quickly revert back to the original setup. For many applications, it is useful in that time period just after you write the code, but before it goes to production.
I'm a fan of virtualization and use it in testing and production (VMware and Hyper-V), but over the last year I've found it less important on a dev machine. TFS provides me with all the backup/rollback ability I need, multiple versions of .NET can now exist on the same machine, and VS2008 can target all those versions.
In a development environment, a virtual environment is useful for putting several different servers on one box: you can have an instance for your web app, one for your services, one for your database, etc. That way it mimics your production environment if you use separate servers.
One of the benefits of using virtualization in production is that your application is not tied to a specific machine. If you wanted to move your web server instance to another box, it is trivial to do so. You don't need to install or configure things on the new server and hope that everything is set up properly.
One problem I have had in testing virtual instances, though, is that they can run slower for some applications, specifically engineering apps that like running the CPU at 100%. So test before you leap.