Adding nodes to Cloudera Quickstart VM - cloudera

I have the Cloudera QuickStart VM installed and it is a single node. How can I add more nodes to it and make it a cluster? I am using VirtualBox and tried to clone the base QuickStart VM, network the clones together, and use the Add Cluster wizard in Cloudera Manager, but it is failing. Does anyone know how to add multiple nodes to it?

Your quickest/easiest option (instead of using a VM) is to install the multi-node version of QuickStart for Docker:
http://blog.cloudera.com/blog/2016/08/multi-node-clusters-with-cloudera-quickstart-for-docker/
Or, you could use Vagrant to set up a virtualized multi-node cluster:
http://blog.cloudera.com/blog/2014/06/how-to-install-a-virtual-apache-hadoop-cluster-with-vagrant-and-cloudera-manager/
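For reference, the linked Docker post drives the multi-node cluster through Cloudera's clusterdock framework. A rough sketch of the kind of invocation it describes (the helper URL and node-naming flags below are taken from memory of that post, so treat them as assumptions and check the post for the current syntax):

```shell
# Load the clusterdock helper functions, then start a multi-node CDH cluster
# (one primary node plus several secondaries), per the linked blog post.
source /dev/stdin <<< "$(curl -sL http://tiny.cloudera.com/clusterdock.sh)"
clusterdock_run ./bin/start_cluster cdh \
    --primary-node=node-1 --secondary-nodes='node-{2..4}'
```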

Related

How to run ESXi on Openstack under a KVM VM

We run OpenStack with KVM as the hypervisor and now need to run ESXi 6 or 7 inside a VM (nested virtualization). This is mainly for converting disks to proper VMDK disks, not really for running any VMs under ESXi (which is why we are not using bare metal with ESXi as the hypervisor).
We run this very same setup under Proxmox without major issues; the main point was using the vmxnet driver for the NIC. That is exactly where we fail with OpenStack. It seems there is no such driver, and using e1000 does not work. Booting the installation ISO ends with 'no nic found'.
We are using OpenStack Xena with Debian Buster as compute (running libvirt) on kernel 5.10/5.14.
Any hints how to get this up and running?
Using https://github.com/virt-lightning/esxi-cloud-images I managed to get it working for 6.5/6.7, but not 7.0.
It seems one cannot install ESXi via ISO on an OpenStack instance directly: no matter whether you use e1000 (6.x) or e1000e (7.x) for the installation, the installer will not be able to find the NIC. The 6.x installer under OpenStack also could not find any disks (with or without the SATA flag).
Instead, I used the repo above to build a pre-installed ESXi image shipped as qcow2; it is built on my local machine and thus against my local libvirt. I am not sure yet why this makes such a difference; maybe the nova-based abstraction or something else hinders OpenStack (no verification yet).
Build the 6.5/6.7-based qcow2 image locally, import it via glance (ensure you use e1000 for 6.x and e1000e for 7.x), and then create a new instance.
This will get you up and running on 6.5/6.7 with proper DHCP and network configuration.
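As a sketch, the glance import with the NIC model pinned might look like the following. The image name, file name, and flavor are placeholders; `hw_vif_model` is the standard Glance image property for selecting the virtual NIC model:

```shell
# Upload the locally built qcow2 and pin the NIC model
# (e1000 for ESXi 6.x, e1000e for 7.x).
openstack image create esxi-6.7 \
  --disk-format qcow2 --container-format bare \
  --file esxi-6.7.qcow2 \
  --property hw_vif_model=e1000

# Boot an instance from it (the flavor/host must allow nested virtualization).
openstack server create --image esxi-6.7 --flavor big-flavor esxi-vm
```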
For 7.x the interface is detected, but somehow DHCP does not work. I tried q35 and various other options, but could not get 7.x to work until now.
I created a fork at https://github.com/EugenMayer/esxi-cloud-images to:
properly expose the credentials one can log in with
remove the ansible zuul user (with a predefined public key) added by the author
clean up the README

cloudera installation process and clustering in local network

How do I install Cloudera on a local system? I'm using CentOS 6.5. I also want to set up clustering in Cloudera. Can anyone point me to documentation that covers this process properly?
You can use either a VM or Docker. Please follow the instructions here to get it installed locally.
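If you go the Docker route, the single-node QuickStart container is started roughly like this (the ports shown are the Hue and Cloudera Manager defaults; adjust to taste):

```shell
# Pull the Cloudera QuickStart image and start it with its bundled init script.
docker pull cloudera/quickstart:latest
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 -p 7180:7180 \
  cloudera/quickstart /usr/bin/docker-quickstart
```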

How could get data from volume attached to instance

I am new to OpenStack. I have a volume attached to an instance. The instance was running perfectly; I updated to Ubuntu 16.04, and after restarting the instance it is stuck in GRUB rescue mode. If I go to the boot directory I cannot see any files. I do not know how to solve this.
Furthermore, a volume is attached to this instance and I need the data on it. Is there any way to copy this data from the volume onto the OpenStack machine? I can access the OpenStack machine by SSH. By "OpenStack machine" I mean the host where Ubuntu OpenStack is installed.
Thanks in advance for all the support.
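One common approach, as a sketch: assuming the volume itself is intact and you have a second, healthy instance, detach the volume from the broken instance, attach it to the healthy one, mount it there, and copy the data off. The server names, volume name, device path, and destination below are all placeholders.

```shell
# Detach the data volume from the broken instance and attach it to a rescue VM.
openstack server remove volume broken-instance data-volume
openstack server add volume rescue-instance data-volume

# Then, inside the rescue VM (the device name may differ; check lsblk or dmesg):
sudo mount /dev/vdb1 /mnt
sudo rsync -a /mnt/ user@backup-host:/backup/
```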

Openstack and devstack

Does devstack completely install OpenStack? I read somewhere that DevStack is not and has never been intended to be a general OpenStack installer. So what does devstack actually install? Is there any other scripted method available to completely install OpenStack (Grizzly release), or do I need to follow the manual installation steps given on the OpenStack website?
devstack does completely install OpenStack from git.
For lesser values of "completely", anyway. devstack is the version of OpenStack used in Jenkins gate testing by developers committing code to the OpenStack project.
devstack, as the name suggests, is specifically for developing for OpenStack, so its existence is ephemeral. In short, after running stack.sh the resulting (probably) functioning OpenStack is set up... but upon reboot it will not come back up. There are no upstart, systemd, or init.d scripts for restarting services. There is no high availability, no backups, no configuration management. And following the latest git releases in the development branch of OpenStack can be a great way to discover just how unstable OpenStack is before a feature freeze.
There are several Vagrant recipes out there for deploying OpenStack, and openstack-puppet is a Puppet recipe for deploying OpenStack. Chef maintains an OpenStack recipe as well.
Grizzly is a bit old now. Havana is the current stable release.
https://github.com/stackforge/puppet-openstack
http://docs.opscode.com/openstack.html
http://cloudarchitectmusings.com/2013/12/01/deploy-openstack-havana-on-your-laptop-using-vagrant-and-chef/
And Ubuntu even maintains a system called MAAS, along with Juju, for deploying OpenStack super quickly on their OS.
https://help.ubuntu.com/community/UbuntuCloudInfrastructure
http://www.youtube.com/watch?v=mspwQfoYQks
So there are lots of ways to install OpenStack.
However, most folks running a production cloud use some form of configuration management system, so that they can deploy compute nodes automatically and recover systems quickly.
Also check out OpenStack on OpenStack:
https://wiki.openstack.org/wiki/TripleO
I think the code should be the same, but at least the configuration is not. For example, devstack will by default use nova-network, while in a manual installation you can choose Neutron. So:
if you are starting to learn OpenStack, devstack is a good starting point; with it, you can quickly have a development environment.
if you are deploying an OpenStack environment, devstack is not a choice; instead you need to install it following the installation guide.
If you would like another scripted option for deployment, you can try Packstack. It works only on Fedora and RHEL.
https://wiki.openstack.org/wiki/Packstack
https://www.rdoproject.org/install/quickstart/
With it, you can choose which services you would like to install; for example, you may choose to install Neutron for networking instead of nova-network.
Also, it lets you deploy multiple compute nodes just by providing their IPs!
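The simplest Packstack run is the all-in-one install; for multiple hosts you generate an answer file and list the compute-node IPs in it. The IPs below are placeholders:

```shell
# Single-host proof-of-concept deployment:
sudo packstack --allinone

# Or generate an answer file, edit it, and deploy across several hosts:
packstack --gen-answer-file=answers.txt
# edit answers.txt, e.g. CONFIG_COMPUTE_HOSTS=192.168.1.11,192.168.1.12
sudo packstack --answer-file=answers.txt
```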
Yes. Devstack is a tool which helps you build an all-in-one OpenStack environment quickly (just grab a cup of coffee and wait until it completes). It is normally used by developers to develop new features and/or test code quickly. As an operator, you need to set everything up manually, step by step, for each service.
To build via the devstack repo, pull the newest source code from http://git.openstack.org/openstack-dev/devstack, then create a new local.conf in the devstack folder and run ./stack.sh.
For example local.conf: https://github.com/pshchelo/stackdev/blob/master/conf/local.conf.sample
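The steps above can be sketched as follows (the passwords are placeholders; see the sample local.conf linked above for many more options):

```shell
# Clone devstack and write a minimal local.conf: just the passwords
# devstack would otherwise prompt for.
git clone http://git.openstack.org/openstack-dev/devstack
cd devstack

cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

# Build the all-in-one environment.
./stack.sh
```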
Yes, devstack installs all the components of OpenStack. With the basic configuration it installs the core components, which form the base of the OpenStack cloud platform, enough to run some basic things.
For an advanced configuration, you edit your local.conf file to choose which services and components you want to install or use in your cloud.
https://github.com/openstack/tacker/blob/master/devstack/local.conf.example

How to use Docker (or Linux Containers) for Network Emulation?

Edit: As of March 2019, although I have not tested it, I believe Docker now has the ability to do real network emulation.
Edit: As of May 2015, SocketPlane (see website and repo) has joined the Docker team and they're in the process of integrating their OVS solution into Docker core. It appears as if theirs will be the winner of the various Docker networking solutions.
So I've been using Mininet to run tests on my networking software. It seems to have hit its limits, though, as Mininet containers are essentially Linux containers with only a networking stack. I'd like each container to have its own networking stack, file system AND set of processes; basically I'd like a container as close to a VM as possible. Which brings me to Docker. As I understand it, Docker is the opposite of Mininet: its containers have a file system and their own processes, but not their own networking stack. I'm leaning towards Docker as it has a nice API for forking containers, using only the disk space of the diff. My question is: is it possible to create a set of Linux containers (with Docker or similar) with the following container layout + network interface setup?
You can use Pipework for that purpose. Private networks between containers, in addition to the standard Docker network, are specifically one of the scenarios it implements.
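A minimal Pipework sketch, assuming two already-running containers and a private bridge named br1 (the image, sleep command, and 192.168.10.x addresses are arbitrary choices for illustration):

```shell
# Start two containers, then wire them to a private bridge with Pipework.
C1=$(docker run -d ubuntu sleep 3600)
C2=$(docker run -d ubuntu sleep 3600)
sudo pipework br1 "$C1" 192.168.10.1/24
sudo pipework br1 "$C2" 192.168.10.2/24
# The containers can now reach each other over the private 192.168.10.0/24
# network, in addition to the default docker0 network.
```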
I am aware of two open-source network emulators that use Linux containers:
The CORE Network Emulator uses containers, and each container has its own filesystem (or a partial filesystem, because it only creates mount namespaces for the directories required by the services running on each node).
The VNX network emulator is another option. It uses either KVM or LXC to create virtual nodes (but I have not tried the LXC option yet).
The CORE Network Emulator does have a Docker service that I contributed and wrote an article about. The initial version, which is in 4.8, is mostly broken, but I have fixed and improved it; a pull request is on GitHub.
The service allows you to tag Docker images with 'core'; they then appear as an option in the service settings. You must select the Docker service, which starts docker in the container, and then select the container or containers that you want to run in that node. It scales quite well: I have had over 2000 nodes on my 16 GB machine.
You mentioned OVS as well. This is not yet built into CORE, but it can be used manually. I just answered a question on the CORE mailing list about this; it gives a brief overview of swapping out a standard CORE switch (bridge) for OVS. The text is reproduced below in case it is useful:
I have not really used openvswitch before, but I had a quick look.
I installed openvswitch via my package manager (Ubuntu 15.04):
sudo apt-get install openvswitch-switch
I then built a very simple network in CORE 4.8: two PCs connected to a switch. I started the emulation in CORE, then on the host I looked at the bridges that had been set up:
sudo brctl show
bridge name bridge id STP enabled interfaces
b.3.76 8000.42c789ce95e9 no veth1.0.76
veth2.0.76
docker0 8000.56847afe9799 no
lxcbr0 8000.000000000000 no
I can see the bridge that represents the switch is called b.3.76 and has interfaces veth1.0.76 and veth2.0.76 attached to it. I delete the bridge:
sudo ip link set b.3.76 down
sudo brctl delbr b.3.76
I then set up the openvswitch bridge:
sudo ovs-vsctl add-br b.3.76
sudo ovs-vsctl add-port b.3.76 veth1.0.76
sudo ovs-vsctl add-port b.3.76 veth2.0.76
I can now ping between the nodes so the switch seems to be working. I have not tried to do any further configuration of openvswitch.
When you stop the CORE emulation, it does not delete the openvswitch bridge or ports, so you have to do that by hand:
sudo ovs-vsctl del-port veth2.0.76
sudo ovs-vsctl del-port veth1.0.76
sudo ovs-vsctl del-br b.3.76
This would be relatively simple to automate with a script, or with a little bit of work it could be integrated into docker.
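A small sketch of how the manual steps above could be scripted: the bridge name is passed as an argument, and the veth ports currently on the bridge are read from sysfs before the bridge is torn down (assumes brctl and ovs-vsctl are installed):

```shell
#!/bin/sh
# Replace a CORE-created Linux bridge with an Open vSwitch bridge,
# re-attaching whatever veth interfaces were on it.
BR="$1"                                   # e.g. b.3.76
IFACES=$(ls "/sys/class/net/$BR/brif")    # ports currently on the bridge

sudo ip link set "$BR" down
sudo brctl delbr "$BR"

sudo ovs-vsctl add-br "$BR"
for IF in $IFACES; do
    sudo ovs-vsctl add-port "$BR" "$IF"
done
```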
Hope this helps
#jpillora The IMUNES network emulator uses Docker for its L3 nodes (PC, Router, Host) and Open vSwitch for its L2 nodes (Hub, Switch). For example, the Router node is actually a Docker container with a Debian Jessie system that runs an automatically configured Quagga, so you just have to draw the nodes inside the GUI. You can then access those containers by double-clicking on them and do whatever you would do on a Linux system. It uses a "special" Docker image available on Docker Hub called imunes/vroot, which uses a dummy init process so it doesn't terminate; technically, with a bit of tweaking, you can replace it with whatever you want. Its source code is available on GitHub.
I think it would be appropriate for your use case.
I tried CORE and a few others but found them hard to set up and run (especially in AWS or on a Mac). They are probably powerful, but overkill if you just want to simulate simple networks.
Hence I wrote YANS (Yet Another Network Simulator). YANS is based on Docker. Even I myself am surprised to see how fast it runs. Give it a shot!