I have set up my OpenStack dev environment. Now I want to write a hello-world program (for example, a file called test so that when I run nova-manage test it prints "Hello World"). I looked around the web for a programming guide, but all I found were the installation and admin manuals. I even went through the question "openstack Hello World", which wasn't helpful. I could use some help...
Thanks in advance.
So by OpenStack dev environment I assume you mean something like DevStack (devstack.org).
And since you referenced nova-manage, I assume you are using the Nova component of OpenStack.
Nova is a cloud compute controller. It effectively acts as an API for managing virtual machines. On Linux this usually means KVM- or Xen-based virtual machines, but it is not constrained to those.
By default DevStack uses KVM as its hypervisor.
OpenStack lets you launch 'instances' once you have loaded images into the Glance image store. These images function like templates for virtual machines. When you launch an instance based on an existing image, you get a running virtual machine within your OpenStack project. If the image is a Linux image, you can SSH to that instance and use it just like any other Linux box.
Ubuntu's cloud image service publishes a list of images that are compatible with Glance and can be freely downloaded.
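As a rough sketch (exact CLI syntax varies a bit between releases; this assumes you have sourced your credentials, e.g. with openrc, and the image, flavor, and key names below are placeholders), loading one of those images and booting an instance looks like this:

    # Register a downloaded Ubuntu cloud image with Glance
    glance image-create --name ubuntu-cloud --disk-format qcow2 \
        --container-format bare --file ubuntu-cloudimg-amd64-disk1.img

    # Boot an instance from that image
    nova boot --image ubuntu-cloud --flavor m1.small --key-name mykey hello-instance

    # Find its IP in the listing, then ssh in like any other Linux box
    nova list
    ssh ubuntu@<instance-ip>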
So... at this point in the explanation I have to assume you think that OpenStack is something like Cloud Foundry. It is not. Nova provides IaaS: Infrastructure as a Service, not PaaS/SaaS the way something like Cloud Foundry does.
Does this make sense?
I am looking into documentation for running Hydra on a single node remotely. I am looking for a way to take code on my local machine and run it on a GCP instance.
Any pointers?
It sounds like you are looking for a Hydra Launcher that supports GCP.
For now, Hydra does not support this. We do have a Ray Launcher that launches to AWS and could be further extended to launch on GCP. Feel free to subscribe to this issue.
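For reference, a sketch of what launching with the existing Ray launcher looks like (the plugin and launcher names below come from hydra-ray-launcher; my_app.py and the swept parameter are placeholders for your own application and config):

    # Install the Ray launcher plugin
    pip install hydra-ray-launcher

    # Run in multirun mode with the Ray AWS launcher; a GCP equivalent
    # would need the launcher to be extended as described above
    python my_app.py --multirun hydra/launcher=ray_aws some.param=1,2,3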
Objective: My objective is to generate a report of all the VMs running in the OpenStack environment (from all projects) along with their flavor sizes. My environment is running Red Hat OpenStack Platform 8 (Liberty).
Question/Issue: Is there a way to get each server and its flavor size across all projects? We can get the server list from all projects using "openstack server list --all-projects", but this does not give the flavor size of each VM.
I thought of writing a simple for loop that takes the server list output and passes it to the "openstack server show" command, but that command does not show details from other projects; it only works for the admin project.
Basically, I need a report similar to the table in "Horizon -> System -> Instances" (dashboard/admin/instances/), which shows the instances from all projects. I would prefer to stick with CLI tools to generate the info.
Appreciate any pointers.
I got it working using the nova CLI: "nova list --fields name,flavor --all-tenants". I could not find any option to list the flavor using the openstack unified CLI.
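For anyone putting together the same report, a sketch of the two commands involved (this assumes admin credentials are sourced; the exact rc file name depends on your deployment):

    # Servers from all projects, each with its flavor
    nova list --fields name,flavor --all-tenants

    # Flavor sizes (vCPUs, RAM, disk), including non-public flavors,
    # to join against that list
    openstack flavor list --all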
I am trying to set up Cloudify in an OpenStack installation using this offline guide.
The guide does not say much about the cloud platform, so I have assumed it can be used in an OpenStack environment. I am using the simple manager blueprint YAML file for bootstrapping.
I have the following questions:
1. Can I use the Fabric plugin 1.4.2 with Cloudify 3.4.1?
2. If not, I am unable to find the wagon (.wgn) file for Fabric plugin 1.4.1.
3. Architecture: Can I use the CLI inside a network to bootstrap a manager within that same network, where the network lies inside the OpenStack environment? Can the Cloudify CLI machine, the Cloudify Manager, and the application all reside within one network inside OpenStack? If so, how? We would like to test everything inside a single network.
(Full disclosure: I wrote the document you linked to.)
1. Yes, you can.
2. You can find the Wagon files for all versions of the Fabric plugin here: https://github.com/cloudify-cosmo/cloudify-fabric-plugin/releases
3. Yes.
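As long as the CLI machine can reach the manager VM on that network (over SSH during the bootstrap and over the manager's REST endpoint afterwards), all three can share a single OpenStack network. A rough sketch of the bootstrap run from that CLI machine (flags can differ slightly between 3.4.x CLI versions, and the file names are placeholders for your blueprint and inputs):

    cfy init
    cfy bootstrap -p simple-manager-blueprint.yaml -i manager-inputs.yaml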
I'm a newbie with these technologies (OpenStack / Docker / Vagrant) and not sure I understand them correctly (most likely I do not). My understanding is that they give you a portable application environment that runs with the same configuration everywhere, so that the whole development team has the same setup. But I did not understand what happens after development, and how to benefit from them with a Dart app.
My questions are:
1. Is my understanding above correct?
2. Does the end user need to have these tools installed on their system and run my application through them, the same as in the development stage?
3. How can I build/develop/distribute a Dart app through them? Maybe because these tools, as well as Dart, are new, I could not find enough info while googling.
Thanks.
Docker is similar to a virtual machine tool like VMware or VirtualBox in that it creates an abstraction layer between the host operating system and the operating system running inside a Docker container. The difference is that Docker doesn't emulate the entire hardware. The disadvantage is that Docker only runs on Linux and only Linux can run inside Docker; if your host is an Intel system, you can't run an ARM Linux inside the container. (Theoretically you could run VirtualBox inside Docker and run Windows or other OSes in it.)
With Docker you can test your application locally in the same environment the application will run in when deployed.
When you create, for example, an application you want to run on Google Compute Engine, you install and test it locally inside a Docker container and then deploy the Docker container to Google Compute Engine as a whole unit. When there is a bug in the deployed application, you should be able to reproduce it locally as well, because the container is a 1:1 copy: no bug could have been introduced because the operating system or other dependencies were installed differently in the deployment environment than in the development/test environment.
The Dockerfile is a set of instructions for setting up a Docker container. If you want to create a new Docker container (for example, for a new developer), you just let Docker process the Dockerfile and a new container is created from it. This makes it easy to create new containers.
If you want to update a dependency to a newer version, or add or remove components of the environment, you change the Dockerfile and create a new container from it. This way you avoid the situation where manual additions to and removals from existing containers let the containers of different developers, testers, and deployments diverge from each other.
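For example (the "myapp" image and container names are just placeholders):

    # Rebuild the image from the updated Dockerfile
    docker build -t myapp:latest .

    # Throw away the old container and start a fresh one from the rebuilt image
    docker rm -f myapp-dev
    docker run -d --name myapp-dev myapp:latest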
I haven't used OpenStack myself but from the web page it seems to provide components and tools to build and manage your own cloud infrastructure.
I also haven't used Vagrant myself, but it seems to help automate a lot of tasks related to creating and managing virtual machines and containers with tools like VMware, VirtualBox, Docker, and probably others.
When you have, for example, a server application, it probably consists of a number of components that you don't want to all run in one container, but rather split up into several containers: one container for the database, one for the web server, one for the backend application (written in Dart, for example), and so on. It can become cumbersome to manage all those containers; Vagrant helps automate the related tasks.
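To get a feel for why this becomes cumbersome, here is roughly what wiring up three containers by hand looks like (the backend image name is a placeholder, and the legacy --link flag is just one way to connect the containers):

    docker run -d --name db postgres
    docker run -d --name backend --link db:db my-dart-backend
    docker run -d --name web -p 80:80 --link backend:backend nginx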
We used the Fuel v4.0 installer to deploy the OpenStack (Havana on CentOS) environment. We have a two-NIC OpenStack deployment in High Availability (HA) mode with three controller nodes and one compute node, running the nova-network (FlatDHCPManager) networking service.
We want to shift the current setup from nova-network to Neutron. The backend for Glance is Swift. Is there a way to back up the VM images that are currently used with nova-network so that the same images can be used in the Neutron setup?
It would be great if anyone could help us with this.
Thanks,
Sonia