Can I install Puppet on all my clients (hosts) at once?

I have installed Puppet on the master and on one of the clients. Now I want to install it on all 100 servers I have and sign the certificates. I know I can sign all the certificates at once, but is there a way to install Puppet on all the hosts at once?

Several ways:
Bake the image
Bake the image with the Puppet agent preinstalled for these 100 servers.
For example, add the shell command yum install -y puppet facter hiera when baking the CentOS image.
Refer to:
packer.io
packer-template
If you prepare the image this way and export it to vSphere or generate an AWS AMI from it, any instance started from this image will already have Puppet installed.
Using automation tools
If these clients are already created and running, use Ansible or any other automation tool to install Puppet on them directly, for example with the ad-hoc command sketched below.
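A minimal sketch using Ansible's ad-hoc mode, assuming an inventory file listing the 100 hosts and sudo access on each (the inventory file name is a placeholder):
ansible all -i hosts.ini -b -m yum -a "name=puppet state=present"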

If you don't want to create an image, you can launch a bash "post-script" (user data) that will be executed just after each instance starts. See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
Example AWS CLI call to launch one instance:
ec2-run-instances --key KEYPAIR --user-data-file install.sh <ami_version>
with this in the install.sh file:
yum install -y puppet facter hiera
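A slightly fuller user-data sketch that also points the new agent at the master; the master hostname is an assumption, and puppet config set assumes a reasonably recent Puppet release:
#!/bin/bash
# install the agent and its dependencies
yum install -y puppet facter hiera
# point the agent at the master (hostname is a placeholder)
puppet config set server puppet-master.example.com --section agent
# start the agent so it submits a certificate request for signing
puppet resource service puppet ensure=running enable=true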


Fresh install: httpd.service: Unit not found

Currently I'm trying to follow this guide:
https://marxtudor.com/how-to-install-wordpress-using-ssh-on-centos-vps/
I'm using Google Cloud Platform (free edition, to test) and I've created a fresh CentOS 7 VM. The commands at the start of that guide are the first ones I run, and I keep getting this error:
I've followed so many tutorials and created a new VM each time, and I always bump into this error that the httpd command is not found. I even deleted the project and started all over, but still no luck.
[rsa-key-XXXXXX]$ sudo service httpd restart
Redirecting to /bin/systemctl restart httpd.service
Failed to restart httpd.service: Unit not found.
[rsa-key-XXXXXX]$ httpd -t
-bash: httpd: command not found
[rsa-key-XXXXXX]$
Could anyone please let me know what could be causing this?
Thanks in advance!
I was also getting the same error; this is how I resolved my issue.
After logging in to the machine:
Step 1: Become the root user: sudo su
Step 2: Update installed packages: yum update -y
Step 3: Install Apache: yum install httpd -y
Step 4: Start Apache: service httpd start
Step 5: Check the status of the service: service httpd status
This should solve your problem. Good luck!
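On CentOS 7 the service command just redirects to systemd, so the native systemctl commands work as well:
systemctl start httpd     # start Apache
systemctl enable httpd    # start Apache automatically at boot
systemctl status httpd    # check the service status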
Do you want to install WordPress for your Compute Engine VM instance, using CentOS 7?
If this is the case, you may do so by setting up LAMP for your VM, as described here [1], and then download the WordPress release of your choice [2] and install it on your VM.
I understand that you have successfully set up a VM instance using CentOS 7, is this correct? Assuming this, and as you may see from [1], for CentOS 7, these would be the commands to perform this installation:
1) Update and install Apache and PHP:
sudo yum check-update
sudo yum -y install httpd php
2) Start the Apache service:
sudo service httpd start
sudo chkconfig httpd on
3) Install, configure and start DB:
sudo yum -y install httpd mariadb-server php php-mysql
sudo systemctl start mariadb
4) Configure MySQL (set a password for the root user if you want):
sudo mysql_secure_installation
5) Restart Apache
sudo service httpd restart
Once MySQL is set up, you will have to create a database for your WordPress installation.
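For reference, a minimal sketch of that step from the shell; the database name, user, and password below are placeholders to adjust:
sudo mysql -u root -p -e "CREATE DATABASE wordpress; CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'change-me'; GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost'; FLUSH PRIVILEGES;"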
Following this procedure, you will have Apache, MySQL and PHP installed and running on your Compute Engine VM instance.
Then, you can download the WordPress release of your choice [2], unzip the file and install WordPress by visiting your IP address and the folder where WordPress was downloaded. For example, http://YOUR_PUBLIC_VM_IP_ADDRESS/wordpress.
You will be asked for a database name, the user and password. This will allow WordPress to create the wp-config.php file on your behalf and proceed with the installation.
At this point, you should have WordPress already installed on your Compute Engine VM instance using CentOS 7.
An easier way to install WordPress on Compute Engine VM instances would be by using the Marketplace in the Cloud Platform Console. Go to your Products and Services menu > Marketplace, and search for "Wordpress". You will be presented with many different options to launch WordPress in a Compute Engine VM instance. Nevertheless, it seems that Debian is the default OS used for these options.
Links:
[1] https://cloud.google.com/community/tutorials/setting-up-lamp
[2] https://wordpress.org/download/
In my case, I resolved it by looking up which package actually had "httpd" in its name:
yum search httpd
It returned httpd.x86_64.
Later on, when doing sudo service httpd start, I received a notification that PolicyKit1 was needed. So, all up, these commands installed the package and started the service:
yum install -y httpd.x86_64 polkit-qt.x86_64
service httpd start

Setting Dokku environment variables

I'm trying to set some variables on Dokku for deployment. As far as I can see from the dev files, one should create a .env file in the directory and put the variables in there, but this is not updating anything.
.env file
DOKKU_NGINX_PORT=3000
MYSQL_URL=http://blabla
MYSQL_USER=mysqluser
I'm trying to map the port of the app to port 3000, and inject the mysql vars into the runtime environment.
I know I can set it with dokku config:set on the server, but I want to be able to automate it during deployment.
Any ideas? Or an example?
You'll need to install a Dokku client, or CLI, in order to interact locally with the remote application on your Dokku instance.
Here are a few options:
(node.js) dokku-toolbelt
Dokku toolbelt is a node-based CLI wrapper that proxies requests to
the Dokku command running on remote hosts.
You can install it via the following shell command (assuming you have node and npm installed):
$ npm install -g dokku-toolbelt
See documentation here for more information.
(python) dokku-client
Dokku client is an extensible python-based cli wrapper for remote
Dokku hosts.
You can install it via the following shell command (assuming you have python and pip installed):
$ pip install dokku-client
See documentation here for more information.
(ruby) Dokku CLI
Dokku CLI is a rubygem that acts as a client for your Dokku
installation.
You can install it via the following shell command (assuming you have ruby and rubygems installed):
$ gem install dokku-cli
See documentation here for more information.
After the Dokku client is installed locally, make sure that the dokku app remote is set inside the repository directory.
You can verify this by running $ git remote -v.
If the output doesn't show your dokku application instance, set it with the following command:
$ git remote add dokku dokku@example.com:your-app-name
Here's an example from my terminal with some information redacted for security purposes.
seth@linuxmint ~/repos/Adopt-a-Pet $ git remote -v
dokku dokku@example.com:adopt-a-pet (fetch)
dokku dokku@example.com:adopt-a-pet (push)
origin https://github.com/sethbergman/Adopt-a-Pet.git (fetch)
origin https://github.com/sethbergman/Adopt-a-Pet.git (push)
Then you can set environment variables with the following commands:
$ dokku config:set DOKKU_NGINX_PORT=3000
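dokku config:set accepts several KEY=VALUE pairs at once, so the three variables from the question could be set in a single call:
$ dokku config:set DOKKU_NGINX_PORT=3000 MYSQL_URL=http://blabla MYSQL_USER=mysqluser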
You can optionally set environment variables with the .env file:
$ dokku config:set:file <path/to/.env>
If the .env file is in the root directory of the repository, then the command would be:
$ dokku config:set:file .env
If you're using Ruby, you can use the dokku-cli gem. With it, you can set config from any file by issuing the command
dokku config:set:file <path/to/file>
See the Ruby documentation.

Jenkins CI integration with NodeJS and GitHub: problems configuring the build

We have built our first Node.js app and I want to integrate Jenkins for continuous integration. We are running the Node server behind Nginx as a proxy, with source control in GitLab. I need example configurations or steps.
Any doc or wiki link, or a pointer in the right direction, would be helpful.
I have a CentOS server and managed to install and configure Jenkins, but I haven't found the proper way to connect my GitLab server. I need to run npm commands after each build. If anyone has already done that, please let me know.
Thanks
Your question is still vague, but I will describe how I set up Jenkins and Node.js with GitLab integration. I have CentOS 6 and tested it there.
Steps
OpenJDK (Java) should be installed first.
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
sudo service jenkins start
Log in as the jenkins user:
sudo -s -H -u jenkins
Now generate an SSH key in the folder /var/lib/jenkins/.ssh and copy that key to GitLab:
ssh-keygen
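GitLab needs the public half of the key; assuming the default file name, you can print it for copy-pasting with:
cat /var/lib/jenkins/.ssh/id_rsa.pub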
Install the Gitlab Hook Plugin and GitLab Plugin in Jenkins.
Create a new item (project) by accessing your Jenkins in the browser. After creating the project, go to its Configure page (left side menu). Most options there are self-explanatory: set up the Git repo URL and the Git browser URL.
Under Build Triggers, check the option "Build when a change is pushed to GitLab". Note the GitLab CI Service URL shown there, and paste that URL into your GitLab repo's webhooks settings.
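With the GitLab plugin, that service URL typically has the form below (JENKINS_HOST and the job name are placeholders):
http://JENKINS_HOST/project/your-job-name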
To run npm commands after the build, there is a section called SSH Publisher; in its Exec commands box you can write your own commands (here is my example):
cd project_dir                            # go to the app's deploy directory
rm -rf public server package.json         # remove the previous release's files
tar -xvf projectname.tgz                  # unpack the newly uploaded build archive
ls
npm install --production                  # install runtime dependencies only
export NODE_ENV=production
forever restartall                        # restart all processes managed by forever
jasmine-node spec/api/frisbyapi_spec.js   # run the API tests
rm -rf projectname.tgz                    # clean up the archive
I have written most of the steps that I took to set up Jenkins, Node.js and GitLab. I might have forgotten a step; if you face any error, please post that as well.

Instance creation in devstack icehouse

I want to create a few instances with Ubuntu installed on them, using OpenStack.
I tried the following steps.
Approach 1
Installed Icehouse devstack:
git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git
cd devstack
./stack.sh
After a successful installation I uploaded an Ubuntu image:
glance image-create --name Ubuntu --disk-format iso --container-format bare <~/sumit/images/ubuntu-14.04.2-desktop-amd64.iso
Logged in to the dashboard and launched the instance (m1.small, RAM GB, total disk 20GB) using this image.
Opened the instance console from the Horizon dashboard and tried to install Ubuntu, but it showed that the required space (6.5GB) is not available.
Then I tried to install Neutron and Heat as well.
Approach 2
Installed Icehouse devstack:
git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git
cd devstack
vi localrc
My localrc looks like:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=False
SCREEN_LOGDIR=$DEST/logs/screen
ADMIN_PASSWORD=password
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=tokentoken
GLANCE_BRANCH=stable/icehouse
HORIZON_BRANCH=stable/icehouse
KEYSTONE_BRANCH=stable/icehouse
NOVA_BRANCH=stable/icehouse
NEUTRON_BRANCH=stable/icehouse
HEAT_BRANCH=stable/icehouse
CEILOMETER_BRANCH=stable/icehouse
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
ENABLED_SERVICES+=,q-lbaas
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
HEAT_STANDALONE=True
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-notify,ceilometer-alarm-eval
After this
./stack.sh
After a successful installation I uploaded an Ubuntu image:
glance image-create --name Ubuntu --disk-format iso --container-format bare <~/sumit/images/ubuntu-14.04.2-desktop-amd64.iso
Logged in to the dashboard and launched the instance (m1.small, RAM GB, total disk 20GB) using this image.
But now it displays
Error: Unable to connect to Neutron
Every time I list the instances it displays the same error.
Can anyone help me overcome these problems so that I can launch some instances and install Ubuntu on them?
"Unable to connect" can mean that the Neutron service is not running. Through the dashboard you cannot create an instance without a network. Use the screen command in devstack to check whether Neutron is running properly.
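A quick way to check, assuming the default devstack screen session name of stack:
screen -x stack                 # attach to the devstack session and look for the q-svc window
ps -ef | grep neutron-server    # or look for the Neutron server process directly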

How to deploy Meteor and Phusion Docker to Digital Ocean with Docker?

What is a workflow for deploying to Digital Ocean with Phusion Docker and Node/Meteor support?
I tried:
FROM phusion/passenger-nodejs:0.9.10
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# ssh
ADD private/keys/akey.pub /tmp/your_key
RUN cat /tmp/your_key >> /root/.ssh/authorized_keys && rm -f /tmp/your_key
## Download dependencies
RUN apt-get update
RUN apt-get install -qq -y python-software-properties software-properties-common curl git build-essential
RUN npm install fibers@1.0.1
# install meteor
RUN curl https://install.meteor.com | /bin/sh
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Enable nginx
# RUN rm -f /etc/service/nginx/down
#setup app
RUN mkdir /home/app/someapp
ADD . /home/app/someapp
WORKDIR /home/app/someapp
EXPOSE 4000
CMD passenger start -p 4000
But nothing is working, and I'm not sure how to manage updating, deploying, and running it.
For example, how would you handle updating the app without rebuilding the Docker image?
Here is my suggested workflow:
Create an account on Docker Hub; you can get one private repository for free. If you want completely private repositories hosted on your own server, you can run your own Docker registry and use it to host your images.
Create your image on your development machine (locally or on a server), then push the image to the repository using docker push.
Update the image when needed and commit your changes with docker commit, then push the updated image to your repository (you should properly version and tag all your images).
You can start a Digital Ocean droplet with Docker pre-installed (from the Applications tab), then simply pull your image and run your container. Whenever you update and push your image from your development machine, simply pull it again from the droplet; a sketch of that cycle follows below.
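A minimal sketch of that cycle; the image name, tag, and port are placeholders:
# on the development machine
docker build -t yourname/yourapp:1.0 .
docker push yourname/yourapp:1.0
# on the droplet
docker pull yourname/yourapp:1.0
docker run -d -p 4000:4000 yourname/yourapp:1.0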
For large and complex infrastructure, I would recommend looking into Ansible to configure your Docker containers and manage Digital Ocean droplets as well.
Be aware that your data will be lost if you remove the container, so consider defining a volume in your container that is mapped to a shared folder on your host machine, as in the example below.
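For example, bind-mounting a host directory into the container (both paths and the image name are placeholders):
docker run -d -v /srv/yourapp-data:/home/app/someapp/data yourname/yourapp:1.0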
I suggest you test your Dockerfile in a local VirtualBox VM first. I wrote a tutorial about deploying a Node.js app with Docker in which I build several images (layers) instead of just one; when you update your app, you only need to rebuild the top layer. Hope it helps: http://vinceyuan.blogspot.com/2015/05/deploying-web-app-redis-postgres-and.html
