Unable to launch a VM/instance in Proxmox from salt-master - salt-stack

I have installed SaltStack (salt-master) on one virtual machine and Proxmox (cloud) on another virtual machine.
They are both on the same network.
The salt-master and Proxmox are running successfully.
Whenever I run the command below:
# salt-cloud -p my-proxmox-config mytest
I get the following output:
[INFO ] salt-cloud starting
[INFO ] Starting new HTTPS connection (1): 192.168.2.245
[INFO ] Creating Cloud VM mytest
[ERROR ] Error creating mytest on PROXMOX
The following exception was thrown when trying to run the initial deployment:
Error: There was a profile error: Failed to deploy VM
Please look at the config files below:
1. /etc/salt/cloud.providers.d/proxmox.conf
proxmox-config:
  user: root@pam or root@pve
  password: oodles
  url: 192.168.2.245
  driver: proxmox
  verify_ssl: False
  minion:
    master: 192.168.2.228
2. /etc/salt/cloud.profiles.d/proxmox.conf
my-proxmox-config:
  provider: proxmox-config
  image: /root/ISO/ubuntu-14.04-server-amd64.iso
  technology: kvm / openvz
  host: cloud
  ip_address: 192.168.2.245
  ssh_username: root
  password: oodles
  cpus: 1
  memory: 512
  swap: 512
  disk: 2
  nameserver: 8.8.8.8 8.8.4.4
Please suggest/advise what I should correct in my configuration files.
Thanks

The error you're getting says that something is wrong with your profile config. We just need to troubleshoot what's going on with it.
I haven't used the proxmox provider, but according to https://docs.saltstack.com/en/latest/topics/cloud/proxmox.html it looks like for the image option you might have to use local:/root/ISO/ubuntu-14.04-server-amd64.iso.
Also, have you tried just technology: openvz?
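If it helps, salt-cloud can query the provider for the image strings it actually accepts (assuming the provider is named proxmox-config as in your providers file):
# salt-cloud --list-images proxmox-config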

I was able to solve the above issue, i.e. I can now launch a VM/instance in Proxmox from salt-master, with the configurations below:
1. /etc/salt/cloud.providers.d/proxmox.conf
proxmox-config:
  minion:
    master_type: standard
    master: '192.x.x.x'
  user: 'root@pam'
  password: "your password"
  url: '192.168.x.x'
  port: '8006'
  driver: proxmox
  verify_ssl: False
2. /etc/salt/cloud.profiles.d/proxmox.conf
my-proxmox-config:
  provider: proxmox-config
  image: local:vztmpl/ubuntu-12.04-standard_12.04-1_i386.tar.gz
  technology: openvz
  host: cloud
  ip_address: 192.168.x.x
  ssh_username: root
  password: "your password"
  cpus: 1
  memory: 512
  swap: 512
  disk: 2
  nameserver: 8.8.8.8 8.8.4.4
In the above file, the image option will only work if you have downloaded the desired operating system template via the Templates option available in the Proxmox GUI.
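For reference, a sketch of how an OpenVZ template can be fetched from the Proxmox node's shell with pveam instead of the GUI, assuming the default local storage (the downloaded file name must then match the image line above):
# pveam update
# pveam available --section system
# pveam download local ubuntu-12.04-standard_12.04-1_i386.tar.gz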
Now you can easily launch an instance using the command below:
# salt-cloud -p my-proxmox-config mytest
Thanks

Related

AKS Network Policy: Why the "host" could not be resolved when adding network policy?

I have a Kubernetes cluster deployed on Azure (AKS). On that cluster I have WordPress deployed with Helm, and an Azure MariaDB that is accessible to WordPress via an External Service object.
My external service looks like:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: app
spec:
  type: ExternalName
  externalName: somename.mariadb.database.azure.com
I defined a network policy in my WordPress chart, and the values.yml looks like:
# Azure MariaDB info
externalDatabase:
  #host: 10.0.4.68 # IT WORKS FINE WHEN I PUT THE MARIADB IP
  host: mysql # IT DOES NOT WORK WHEN I PUT THE EXTERNAL SERVICE OBJECT NAME NOR THE FOLLOWING ENDPOINT: somename.mariadb.database.azure.com
  port: 3306
  database: bitnami_wordpress
networkPolicy:
  enabled: true
  ingressRules:
    accessOnlyFrom:
      enabled: true
      customRules:
        - {}
  egressRules:
    customRules:
      - to:
          - ipBlock:
              cidr: 10.0.4.64/28 # THE SUBNET OF THE MARIADB DATABASE
        ports:
          - protocol: TCP
            port: 3306
When I replace externalDatabase.host with the IP of MariaDB it works fine. But when I replace it with the external service object name (i.e. mysql, from the first manifest) or with the endpoint (i.e. somename.mariadb.database.azure.com) I get the following error:
wordpress 15:35:09.41 DEBUG ==> Executing SQL command:
SELECT 1
ERROR 2005 (HY000): Unknown server host 'somename.mariadb.database.azure.com' (-3)
wordpress 15:35:34.43 DEBUG ==> Executing SQL command:
PS: the above error is from setting externalDatabase.host to somename-dev.mariadb.database.azure.com; it is the same error as when externalDatabase.host is set to mysql.
Any help please
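One thing worth checking: with egress rules enabled, the policy above only allows traffic to the MariaDB subnet on port 3306, so DNS lookups from the WordPress pod may be blocked, which would explain why the raw IP works but a hostname does not. A hedged sketch of an additional egress rule that also allows DNS, assuming the chart passes customRules through verbatim as NetworkPolicy egress rules and that cluster DNS listens on port 53:
egressRules:
  customRules:
    - to:
        - namespaceSelector: {} # allow DNS lookups against kube-dns in any namespace
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:
        - ipBlock:
            cidr: 10.0.4.64/28
      ports:
        - protocol: TCP
          port: 3306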

Dokku postgres: Expose command bug: `The container name "/dokku.postgres.APP_NAME.ambassador" is already in use by container`

$ dokku postgres:expose wiki-fashion-hasura
docker: Error response from daemon: Conflict. The container name "/dokku.postgres.wiki-fashion-hasura.ambassador" is already in use by container "05ac13c5682af1b1334ffda6d9142c2e577c81f0776c9a0449516d5ca6d55c8d". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
I checked docker ps and there is no container 05ac13c5682af1b1334ffda6d9142c2e577c81f0776c9a0449516d5ca6d55c8d
Then when trying to expose again:
$ dokku postgres:expose wiki-fashion-hasura
! Service wiki-fashion-hasura already exposed on port(s) 729
$ dokku postgres:info wiki-fashion-hasura
=====> wiki-fashion-hasura postgres service information
Config dir: /var/lib/dokku/services/postgres/wiki-fashion-hasura/data
Data dir: /var/lib/dokku/services/postgres/wiki-fashion-hasura/data
Dsn: postgres://postgres:03baa499ae71ae371a9276536df5fa56@dokku-postgres-wiki-fashion-hasura:5432/wiki_fashion_hasura
Exposed ports: 5432->729
Id: 89aa118cd1a41fc28170f6de3ed236171d3f3e2d8c019c62f74b2381282284f9
Internal ip: 172.17.0.8
Links: wiki-fashion-hasura
Service root: /var/lib/dokku/services/postgres/wiki-fashion-hasura
Status: running
Version: postgres:12
But
telnet <HOST> 729
telnet: connect to address <HOST>: Connection refused
It isn't exposed. (other ports with this same IP are resolving)
How can I debug this further?
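As a hedged debugging sketch: docker ps only lists running containers, so the conflicting ambassador container may simply exist in a stopped state. Listing all containers, removing the stale one, and then unexposing and re-exposing the service (postgres:unexpose is provided by the dokku postgres plugin) may clear it up:
$ docker ps -a --filter name=dokku.postgres.wiki-fashion-hasura.ambassador
$ docker rm dokku.postgres.wiki-fashion-hasura.ambassador
$ dokku postgres:unexpose wiki-fashion-hasura
$ dokku postgres:expose wiki-fashion-hasura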

ASP NET Core: Aliases in docker-compose don't appear in /etc/hosts

I have a Docker-supported ASP.NET Core app.
The docker-compose file looks like this:
version: '3'
services:
  test:
    image: test
    build:
      context: ./Test
      dockerfile: Dockerfile
    networks:
      test_nw:
        aliases:
          - test_alias
  oracledb:
    image: sath89/oracle-12c
    ports:
      - "1521:1521"
    networks:
      test_nw:
        aliases:
          - oracledb_alias
networks:
  test_nw:
But after starting the app I looked inside the ASP.NET Core app's container (docker exec -it ... bash) and checked the /etc/hosts file, and the alias of the DB, oracledb_alias, does not appear in it. So the app does not find the DB when using oracledb_alias as the host name in the connection string.
What did I do wrong? How do I solve this problem?
You did nothing wrong. Earlier versions of Docker used /etc/hosts for resolving hostnames and links; Docker now uses an internal DNS server for this.
So you don't see that information in /etc/hosts. The only thing you can do is run a command and test whether you can resolve the name or not:
$ dig oracledb_alias
$ ping oracledb_alias
$ telnet oracledb_alias 1521
See the link below for more details:
https://docs.docker.com/engine/userguide/networking/configure-dns/
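Assuming the compose service names above and an image that ships getent, a minimal way to verify resolution from inside the app container (a sketch, adjust names to your setup):
$ docker-compose exec test getent hosts oracledb_alias
$ docker-compose exec test ping -c 1 oracledb_alias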

salt-cloud minion install/config on windows VM

I'm trying to test salt-cloud with VMware/vCenter and all is really good so far. However, it appears the minion is not being installed on the VM. I have been digging around and the only settings I find are at
http://salt-cloud.readthedocs.io/en/latest/topics/windows.html
my-softlayer:
  provider: softlayer
  user: MYUSER1138
  apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'
  minion:
    master: saltmaster.example.com
  win_installer: /root/Salt-Minion-0.17.0-AMD64-Setup.exe
  win_username: Administrator
  win_password: letmein
Does this mean I need to have the Windows installer in /root for salt-cloud to run? I thought setting deploy: True would do the install. Below is my config. My VLAN is using DHCP, so I am getting a good IP and all.
cloud.profile.d/test.conf
windows-test:
  provider: vcenter
  clonefrom: 'Win2K12'
  num_cpus: 1
  memory: 2GB
  devices:
    network:
      Network adapter 1:
        name: vlan
        adapter_type: vmxnet3
        switch_type: distributed
  cluster: cluster
  datastore: datastore
  folder: 'OS Testing'
  power_on: True
  deploy: True
  customization: False
  win_username: Administrator
  win_password: password
  minion:
    master: salt
EDIT
I do have to specify the installer location; that seems to work. The problem now is trying to get pywinrm/Windows Remote Management to work. For some reason salt-cloud is trying to connect on 5986, but looking at the VM I see WinRM is listening on 5985. So I'm wondering if it's a pywinrm setting now?
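Regarding the port mismatch: recent salt-cloud releases document WinRM-related options that can be set alongside the other win_* settings (use_winrm, winrm_port, and on newer versions winrm_use_ssl); assuming your salt-cloud version supports them, a sketch of the relevant profile lines pointing the deploy at HTTP WinRM on 5985 instead of HTTPS on 5986 might look like:
windows-test:
  # ... existing profile settings ...
  deploy: True
  use_winrm: True   # option name per the salt-cloud Windows docs; availability depends on version
  winrm_port: 5985  # default is 5986 (HTTPS); this VM listens on 5985 (HTTP)
  win_username: Administrator
  win_password: password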

How to see Nginx default page using vagrant docker provider?

I try to run my Nginx server using the Vagrant Docker provider like:
vagrant up
The Vagrantfile instructions are:
# Specify Vagrant version and Vagrant API version
Vagrant.require_version ">= 1.6.0"
VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'

# Create and configure the Docker container(s)
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.network "private_network", ip: "192.168.66.66"
  config.vm.provider "docker" do |docker|
    docker.name = 'nginx-container'
    docker.image = "nginx:latest"
    docker.ports = ['80:80', '443:443']
  end
end
If I check status of vagrant with vagrant status I get:
Current machine states:
default running (docker)
The container is created and running. You can stop it using
`vagrant halt`, see logs with `vagrant docker-logs`, and
kill/destroy it with `vagrant destroy`.
When I try to load the http://192.168.66.66/ page, I get ERR_CONNECTION_TIMED_OUT and the page does not load. Why don't I see the Nginx default web page?
The logs during vagrant up in console are:
==> default: Docker host is required. One will be created if necessary...
default: Docker host VM is already ready.
==> default: Syncing folders to the host VM...
default: Installing rsync to the VM...
default: Rsyncing folder: /Users/victor/www/symfony/ => /var/lib/docker/docker_1430638235_29519
==> default: Warning: When using a remote Docker host, forwarded ports will NOT be
==> default: immediately available on your machine. They will still be forwarded on
==> default: the remote machine, however, so if you have a way to access the remote
==> default: machine, then you should be able to access those ports there. This is
==> default: not an error, it is only an informational message.
==> default: Creating the container...
default: Name: nginx-container
default: Image: nginx:latest
default: Volume: /var/lib/docker/docker_1430638235_29519:/vagrant
default: Port: 80:80
default: Port: 443:443
default:
default: Container created: b798ea3309612fb2
==> default: Starting container...
==> default: Provisioners will not be run since container doesn't support SSH.
This is several months old, but I'll answer anyway for the sake of other people that might land here:
Use docker ps to see your container identifier, which should be 'b798ea3309612fb2', and then run:
docker inspect b798ea3309612fb2 | grep IPAddress
so you can confirm the IP address.
Since you're exposing the ports, you should see them on your REAL IP (that of whatever machine is hosting Docker for Vagrant).
Make sure there is no firewall blocking them.
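Also, per the warning printed during vagrant up, the forwarded ports end up on the intermediate Docker host VM rather than on your own machine, so another hedged check (assuming the host VM that Vagrant auto-creates for the Docker provider) is to ssh into that VM and hit the port locally:
$ vagrant global-status        # find the ID of the auto-created docker host VM
$ vagrant ssh <docker-host-id>
$ curl -I http://localhost:80  # should return the Nginx default page headers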
