Secondary storage not recognized in Apache CloudStack - apache-cloudstack

I am trying to set up a CloudStack (v4.4 on CentOS 6.5) management instance to talk to one physical host running XenServer (6.2).
I have got as far as setting up the zone/pod/cluster/host, and it can see the XenServer machine. Primary storage is also visible to it - I can see it in the dashboard. However, it can't see the secondary storage, and thus I can't download templates/ISOs. The dashboard says 0kb of 0kb in use for secondary storage.
I have tried having the secondary storage local to the CloudStack management instance (whilst setting the use.local global setting to true). I have also tried setting up a new host as the NFS share, and that did not work either.
In both cases I have checked that the shares I created are mountable - and they are. I have also seeded them with the system VM template by running the command outlined in the installation guide. Both places I set up as secondary storage had ample space available - one greater than 200GB, the other around 70GB. I have also restarted the management machine a few times.
Any help would be much appreciated!

You need secondary storage enabled in order to supply templates to your hosts. The simplest way to achieve that is to create an NFS export that is available to the host; I usually do it on the host itself, which in your case would be the XenServer. Then, in the management server, add the secondary storage under: Infrastructure -> Secondary Storage -> Add Secondary Storage.
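For example, a minimal NFS export on that host might look like the following sketch (the path, export options, and NFS_SERVER_IP are placeholders - adjust them for your environment and make sure the NFS services are running):
# On the host that will serve secondary storage:
mkdir -p /export/secondary
echo "/export/secondary *(rw,async,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -a
# Verify the export is visible from the management server:
showmount -e NFS_SERVER_IP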
Secondary storage is provided by a dedicated system VM. Once you add a secondary storage, CloudStack will create a system VM for that. Start by checking the status of the system VMs in: Infrastructure -> System VMs
The one you are looking for should be called Secondary Storage VM.
It should be running and the agent should be ready (two green circles). If the agent is not ready, first ssh to your XenServer host and then to the system VM using the link local IP (you can see the IP in the details of the VM) with the following command:
ssh -i /root/.ssh/id_rsa.cloud -p 3922 LINK_LOCAL_IP_ADDRESS
Then in the system VM, run a diagnostic tool to check what could be wrong:
/usr/local/cloud/systemvm/ssvm-check.sh

Related

Docker Google cloud

I have a CentOS VM instance in Google Cloud and I have installed Docker on it. I have created a container with a web interface, but I am not able to access it from outside (in another browser tab). What do I need to do to access it from outside of the cloud?
There are several leaps between your browser and your containerised web interface.
The first is from the public IP through the GCP firewall into the instance, and you might be getting stuck here. When you created the instance, in the Firewall section, did you select "Allow HTTP traffic" and "Allow HTTPS traffic"?
If you click through to your instance details in the GCP dashboard you can see under Firewalls whether this is selected. Under Network you can also see which network profile your instance is using; you can click the listed network to check whether it is set up to allow the traffic you are trying to send through.
If this all looks right and traffic is getting to the instance but not the web interface, it could be that the container's port is not mapped to a port on the host. When you started the container, did you use the -p option to map the ports?
If this is also right, then it could be that the Docker image is not exposing its port internally: in the Dockerfile used to create the image for the container, is there a line starting with EXPOSE, or does it build FROM an image that does?
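To make those checks concrete, here is a rough sketch (the rule name, image name, and ports are placeholders, not taken from your setup):
# Allow HTTP through the GCP firewall on the network the instance uses:
gcloud compute firewall-rules create allow-http --network default --allow tcp:80
# Publish the container's port on the host when starting it:
docker run -d -p 80:8080 my-web-image
# And in the image's Dockerfile, the internal port should be declared, e.g.:
# EXPOSE 8080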
There are more possible points of failure in this chain but I have tried to list some likely answers. If none of this helps then let me know in the comments and we can try and debug the issue.

How to encrypt docker images or source code in docker images?

Say I have a Docker image and I have deployed it on some server, but I don't want other users to access this image. Is there a good way to encrypt the Docker image?
Realistically, no: if a user has permission to run the Docker daemon then they are going to have access to all of the images - this is due to the elevated permissions Docker requires in order to run.
See the extract from the Docker security guide below for more info on why this is.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.
First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction. This is similar to how virtualization systems allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine.
This has a strong security implication: for example, if you instrument Docker from a web server to provision containers through an API, you should be even more careful than usual with parameter checking, to make sure that a malicious user cannot pass crafted parameters causing Docker to create arbitrary containers.
For this reason, the REST API endpoint (used by the Docker CLI to communicate with the Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you happen to run Docker directly on your local machine, outside of a VM). You can then use traditional UNIX permission checks to limit access to the control socket.
You can also expose the REST API over HTTP if you explicitly decide to do so. However, if you do that, being aware of the above mentioned security implication, you should ensure that it will be reachable only from a trusted network or VPN; or protected with e.g., stunnel and client SSL certificates. You can also secure them with HTTPS and certificates.
The daemon is also potentially vulnerable to other inputs, such as image loading from either disk with ‘docker load’, or from the network with ‘docker pull’. This has been a focus of improvement in the community, especially for ‘pull’ security. While these overlap, it should be noted that ‘docker load’ is a mechanism for backup and restore and is not currently considered a secure mechanism for loading images. As of Docker 1.3.2, images are now extracted in a chrooted subprocess on Linux/Unix platforms, being the first step in a wider effort toward privilege separation.
Eventually, it is expected that the Docker daemon will run restricted privileges, delegating operations to well-audited sub-processes, each with its own (very limited) scope of Linux capabilities, virtual network setup, filesystem management, etc. That is, most likely, pieces of the Docker engine itself will run inside of containers.
Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the server, and move all other services within containers controlled by Docker. Of course, it is fine to keep your favorite admin tools (probably at least an SSH server), as well as existing monitoring/supervision processes (e.g., NRPE, collectd, etc).
If only some strings need to be encrypted, you could possibly encrypt that data using openssl or an alternative solution. The encryption would be set up inside the Docker container: when the container is built, the data is encrypted; when the container is run, the data is decrypted (possibly with the help of an entrypoint script using a passphrase passed from a .env file). This way the container can be stored safely.
I am going to play with it this week as time permits, as I am pretty curious myself.
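A minimal sketch of that idea, assuming a hypothetical secrets.txt and a passphrase exposed to the container as an environment variable (for example loaded from a .env file):
# At build time: encrypt the data before it is baked into the image
openssl enc -aes-256-cbc -salt -in secrets.txt -out secrets.enc -pass env:BUILD_PASSPHRASE
# At run time (e.g. in an entrypoint script): decrypt using the runtime passphrase
openssl enc -d -aes-256-cbc -in secrets.enc -out /tmp/secrets.txt -pass env:PASSPHRASE
Keep in mind this only protects the data at rest; anyone who can run the container with the right passphrase can still read the decrypted data.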

woocommerce webhooks not firing

WooCommerce webhooks aren't firing at all for me, even on a fresh install. I did the following:
Create a new MySQL database
Install WP from the zip file.
Set up WP.
Install Woocommerce.
Enable REST API and create a key.
Added "Coupon created" webhook, made sure it's set to active, and set it to a publicly accessible site.
When I create a coupon, the webhook does not fire, and no entry is created in the log. I tried this with orders as well, and it also doesn't work.
I think it's a machine configuration problem, but I'm not sure what to change. The machine is an EC2 instance and has all ports opened in its security group policy.
Weirdest of all, it does work on a different EC2 instance, but that's a production machine and I want a dev server working so I can test things out. The only config differences between the production and dev machines that I can think of are the subnets and the firewall, but I don't understand why the subnet should matter, and I opened all the firewall ports on the dev machine.
What Linux distributions are you running for prod and dev?
CentOS with SELinux enabled does not allow HTTPD scripts and modules to connect to the network by default. Try:
setsebool -P httpd_can_network_connect on
If the above does not apply, check for network problems by trying to connect to AWS RDS from the command line over SSH. If you can open a connection, the problem is with your application; if you can't, it is a network problem. The first thing to check in that case is the AWS RDS security group. For testing, you can open 3306 to the public.
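For example, from the dev instance's shell you could check reachability roughly like this (the hostnames are placeholders):
# Check that the database port is reachable from this machine:
nc -zv your-rds-endpoint.rds.amazonaws.com 3306
# Or try an actual MySQL connection:
mysql -h your-rds-endpoint.rds.amazonaws.com -u wp_user -p
# And check that the webhook delivery URL is reachable too:
curl -I https://your-delivery-site.example.com/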
Let me know how it goes.

Running Kubernetes on vCenter

So Kubernetes has a pretty novel network model, that I believe is based on what it perceives to be a shortcoming with default Docker networking. While I'm still struggling to understand: (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution and perhaps that will clue me in a little better.
Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place.
I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel.
My basic understanding is:
Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
Create a cbr0 bridge somewhere and make sure it's persistent (but on which machine?)
Remove/disable the MASQUERADE rule installed by Docker.
Somehow configure iptables routes (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs.
Some other setup is required to make use of load balanced Services and dynamic DNS.
Provision 5 VMs: 1 master, 4 minions
Install/configure Docker on all 5 VMs
Install/configure kubectl, controller-manager, apiserver and etcd on the master, and run them as services/daemons
Install/configure kubelet and kube-proxy on each minion and run them as services/daemons
This is the best I could collect from 2 full days of research, and these steps are likely wrong (or misdirected), out of order, and utterly incomplete.
I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved.
How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?
I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.
For a general introduction into kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful.
On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model .
From my experience: what is the problem with the Docker NAT-type approach? Sometimes you need to configure all the endpoints of all nodes into the software (172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the IPs of the pods into each other's pods; Docker complicates this with its NAT indirection.
See also Setting up the network for Kubernetes for a nice answer.
Comments on your other points:
1.
All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
The "internal network" of kubernetes normally uses private IP's, see also slides above, which uses 10.x.x.x as example. I guess confusion comes from some kubernetes texts that refer to "public" as "visible outside of the node", but they do not mean "Internet Public IP Address Range".
For anyone who is interested in doing the same, here is my current plan.
I found the kube-up.sh script which installs a production-ish quality Kubernetes cluster on your AWS account. Essentially it creates 1 Kubernetes master EC2 instance and 4 minion instances.
On the master it installs etcd, apiserver, controller manager, and the scheduler. On the minions it installs kubelet and kube-proxy. It also creates an auto-scaling group for the minions (nice), and creates a whole slew of security- and networking-centric things on AWS for you. If you run the script and it fails creating the AWS S3 bucket, create a bucket of the same exact name manually and then re-run the script.
When the script is finished you will have Kubernetes up and running and ready for near-production usage (I keep saying "near" and "production-ish" because I'm too new to Kubernetes to know what actually constitutes a real deal productionalized cluster). You will need the AWS CLI installed and configured with a user that has full admin access to your AWS account (it goes ahead and creates IAM roles, etc.).
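For reference, the invocation is roughly the following sketch (the provider variable and exact commands can differ between Kubernetes releases, so check the script's documentation):
# From a checkout of the Kubernetes release, with the AWS CLI already configured:
export KUBERNETES_PROVIDER=aws
cluster/kube-up.sh
# Afterwards, sanity-check the cluster:
kubectl get nodes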
My game plan will be to:
Get comfortable working with Kubernetes on AWS
Keep hounding the Kubernetes team on Slack to help me understand how Kubernetes works under the hood
Reverse engineer the kube-up.sh script so that I can get Kubernetes running on premise (vCenter)
Blog about this process
Update this answer with a link to said blog.
Give me some time and I'll follow through.

Integration of Docker with OpenStack via Docker Heat Plugin

I'm trying to integrate Docker with OpenStack (Icehouse) via the Docker Heat plugin and I'm facing a problem.
OpenStack is configured according to the tutorial by OpenStack for Ubuntu. I'm using a controller node and a compute node (just the 2 nodes) with the legacy nova-networking.
Things to keep in mind:
Controller Node: 1 network interface - management interface
Compute Node: 2 network interfaces - the management interface and the external interface (VM instances have IPs in the same subnet as the external interface)
With OpenStack everything works perfectly, except for the following (which might be related to the problem I'm facing with Docker):
1- You can't reach (ping) the deployed VM instances from the controller node [makes sense, I think that one is not a problem]
2- You can't reach (ping) the deployed VM instances from the compute node (ping: operation not permitted) [might be the issue] - but you can ping from a VM instance to the compute node
3- The virtual machines themselves don't see each other [but I don't think this is related to the issue I'm facing]
For Docker, the plug-in is installed. I assume it is installed correctly since the DockerInc::Docker ... resource syntax is accepted, but when I try to run the example posted in the Docker blog - making the adjustments required - the compute instance is created but the Docker container is not. I'm getting this error:
When i try it as a user with admin role
MissingSchema: Invalid URL u'192.168.122.26/v1.9/containers/None/json': No schema supplied. Perhaps you meant http://192.168.122.26/v1.9/containers/None/json
When i try it as a user with just a member role
MissingSchema: Invalid URL u'192.168.122.26/v1.9/containers/create': No schema supplied. Perhaps you meant http://192.168.122.26/v1.9/containers/create
Notes:
192.168.122.26 is the IP of the created VM instance.
I've tried not only with cirros but also with coreos and ubuntu-precise (same error).
Docker itself is installed on both the controller and the compute node.
The Docker plugin and its requirements are only installed on the controller node.
Finally, both the controller and the compute nodes run as virtual machines themselves.
I would be really glad if you had an idea. Thanks for your time,
Kindest Regards,
M. El Sioufy
My guess is that you haven't allowed communication to the VMs from the outside world (which the controller and/or the compute node will be from the VM's point of view). By default, communications from VMs to the outside world are allowed, but not inbound to the VMs. Try adding an "allow all TCP" rule to the default security group of the tenant that the VMs live in. This may fix your HTTP timeout.
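With legacy nova-networking, that could look roughly like the following (run as the tenant that owns the VMs; the ranges are deliberately wide open and only meant for testing):
# Allow all inbound TCP, plus ICMP so ping works, for the default security group:
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0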
