I am creating an instance in OpenStack with a hardened CentOS 8 image. The cloud-init configuration is as follows:
#cloud-config
users:
  - name: clouduser
    password: password
    sudo: ['ALL=(ALL) ALL']
    groups: sudo
    shell: /bin/bash
    lock_passwd: False
    plain_text_passwd: password
ssh_pwauth: True
runcmd:
  - mkdir /run/test
Here the user is created and I am able to log in to the instance, but the commands in runcmd are not executed. Even though /var/log/cloud-init.log reports that runcmd ran successfully, no folder is created under /run/ and /etc/cloud/cloud.cfg is unchanged (the runcmd module in cloud_config and the scripts-user module in cloud_final are present and reported as executed successfully), yet no commands were actually run. The same commands work fine if I run them inside the instance, and commands in bootcmd also work, but not runcmd. I can't figure out why it isn't being executed.
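For reference, these are the standard places to check what cloud-init actually did with runcmd (generic cloud-init paths and commands, not specific to this image):
# overall cloud-init result and any recorded errors
cloud-init status --long
# the shell script cloud-init generates from the runcmd section
cat /var/lib/cloud/instance/scripts/runcmd
# stdout/stderr of the final stage, where runcmd/scripts-user run
sudo tail -n 50 /var/log/cloud-init-output.log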
I am trying to run an Ansible playbook from the Execute Shell section of Jenkins, but I am getting the errors below. I have already installed the Ansible plugin and configured it in the Global Tool Configuration section.
+ ansible-playbook -i inventory installapache.yml -vvvv
/tmp/jenkins1596894985578146945.sh: line 4: ansible-playbook: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I am using the commands below:
export ANSIBLE_FORCE_COLOR=true
cd /home/ec2-user
ansible-playbook -i hosts /home/ec2-user/installapache.yml -vvvv
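One thing worth checking (a sketch, not part of the original report): the Execute Shell step runs as the jenkins user with a minimal PATH, so ansible-playbook must either be on that PATH or referenced by its full path. The /usr/local/bin location below is only an assumption about where a pip install would put it:
whoami
echo $PATH
export PATH=$PATH:/usr/local/bin   # assumption: Ansible installed via pip into /usr/local/bin
which ansible-playbook
cd /home/ec2-user
ansible-playbook -i hosts /home/ec2-user/installapache.yml -vvvv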
I have cloned a cordapp example https://github.com/corda/samples/tree/release-V4/cordapp-example
cd /cordapp-example
./gradlew deployNodes
kotlin-source/build/nodes/runnodes
The example runs correctly.
How do I shut down the example cordapp and the Corda nodes?
Type bye inside each node terminal.
For the above example, update the Gradle file to include the sshd option sshdPort; this generates the sshd config in node.conf for each node:
sshd {
    port=2222
}
Reference - https://docs.corda.net/docs/corda-os/4.4/generating-a-node.html
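For illustration only, the corresponding node entry in the deployNodes block of build.gradle could look roughly like this (the node name and the other ports are placeholders, not taken from the sample project):
node {
    name "O=PartyA,L=London,C=GB"
    p2pPort 10002
    rpcSettings {
        address("localhost:10003")
        adminAddress("localhost:10043")
    }
    sshdPort 2222
}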
From your local desktop, log in via SSH to the remote Ubuntu machine running the node:
ssh -p <port> <ipaddress> -l user1
Reference - https://docs.corda.net/docs/corda-os/4.4/shell.html#the-shell-via-the-local-terminal
Then, in the Corda Crash shell that launches, run the command:
run gracefulShutdown
Reference - https://docs.corda.net/docs/corda-os/4.4/shell.html#shutting-down-the-node
This shuts down the Corda node on the remote Ubuntu machine.
I'm using GitLab on a Raspberry Pi 3 Model B. Here is some information about my setup (sudo gitlab-rake gitlab:env:info):
System information
System: Raspbian 8.0
Current User: git
Using RVM: no
Ruby Version: 2.3.6p384
Gem Version: 2.6.13
Bundler Version: 1.13.7
Rake Version: 12.3.0
Redis Version: 3.2.11
Git Version: 2.14.3
Sidekiq Version: 5.0.5
Go Version: go1.3.3 linux/arm
GitLab information
Version: 10.6.0-rc3
Revision: 52fa89e
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://gitlab.example.com
HTTP Clone URL: http://gitlab.example.com/some-group/some-project.git
SSH Clone URL: git@gitlab.example.com:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 6.0.3
Repository storage paths:
- default: /mnt/SeagateExpansion/GitLab/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks
Git: /opt/gitlab/embedded/bin/git
After the GitLab update to version 10.6.0 I need to change the URL again, but when I make the necessary changes in /etc/gitlab/gitlab.rb and run sudo gitlab-ctl reconfigure I get the following error messages:
========================================================================
Error executing action `run` on resource 'ruby_block[directory resource:
/mnt/SeagateExpansion/GitLab]'
========================================================================
and
============================================================================
Error executing action `create` on resource
'storage_directory[/mnt/SeagateExpansion/GitLab]'
============================================================================
The result message says:
There was an error running gitlab-ctl reconfigure:
storage_directory[/mnt/SeagateExpansion/GitLab] (gitlab::gitlab-rails line 42) had an error: Mixlib::ShellOut::ShellCommandFailed: ruby_block[directory resource: /mnt/SeagateExpansion/GitLab] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/package/resources/storage_directory.rb line 33) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of chmod 00700 /mnt/SeagateExpansion/GitLab ----
STDOUT:
STDERR: chmod: changing permissions of ‘/mnt/SeagateExpansion/GitLab’: Operation not permitted
---- End output of chmod 00700 /mnt/SeagateExpansion/GitLab ----
Ran chmod 00700 /mnt/SeagateExpansion/GitLab returned 1
So the problem seems to be that the execution of the run and create actions on the storage resource (the GitLab folder on the external HDD [HDD = SeagateExpansion]) expects the permissions to be 700, right?
Based on these errors I tried to change the permissions of the external HDD folder /mnt/SeagateExpansion/GitLab; see the ls -l output:
drwxrwxrwx 1 root GitLabUser 0 Jan 4 17:55 GitLab
With the help of this post I tried to change the permissions with the command:
sudo find /mnt/SeagateExpansion/GitLab -type d -exec chmod 700 {} \;
to the required permission 700, but the changes don't take effect. I also tried chmod -R 700 /mnt/SeagateExpansion/GitLab and executed the commands as root, but the changes still don't take effect, even after restarting the Raspberry Pi. What am I doing wrong?
I also tried to change the options/flags of the HDD in /etc/fstab to user, but this doesn't help either.
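For completeness, a generic way to see which filesystem driver and mount options are actually in effect for that mount is:
findmnt /mnt/SeagateExpansion
mount | grep SeagateExpansion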
I'm thankful for every hint and answer :).
Best regards,
Bredjo
I finally figured it out. The solution is to change the mount settings in /etc/fstab, because with the wrong options (see https://en.wikipedia.org/wiki/Fstab) you are not able to change the permissions, since it's an NTFS filesystem.
So my old fstab entry was this:
UUID=FE820568820526AD /mnt/SeagateExpansion ntfs defaults,gid=GitLabUser 0 0
And the new entry is this:
UUID=FE820568820526AD /mnt/SeagateExpansion ntfs-3g permissions 0 0
Note that you need to install ntfs-3g to use it in fstab, and the permissions option only comes with ntfs-3g. See: https://www.tuxera.com/community/ntfs-3g-advanced/ownership-and-permissions/
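In terms of commands, that amounts to something like this (a sketch; the mount point is the one from the fstab entry above):
sudo apt-get install ntfs-3g
sudo umount /mnt/SeagateExpansion
sudo mount /mnt/SeagateExpansion   # re-mounts using the new fstab entry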
After this change I executed again:
sudo gitlab-ctl reconfigure
Now the error disappeared and the permission 700 of the folder /mnt/SeagateExpansion/GitLab could be set. I also noticed that the owner of the GitLab folder changed to the user git after the reconfiguration:
drwx------ 1 git root 0 Jan 4 17:55 GitLab
That's because I no longer need the option gid=GitLabUser.
Now everything works again :).
As claimed on their website, GitLab can be used to auto-deploy projects after some code is pushed into the repository, but I am not able to figure out how. There are plenty of Ruby tutorials out there, but none for Meteor or Node.
Basically I just need to rebuild a Docker container on my server after code is pushed into my master branch. Does anyone know how to achieve this? I am totally new to the .gitlab-ci.yml stuff and appreciate help very much.
Brief: I have been running a Meteor 1.3.2 app, hosted on Digital Ocean (Ubuntu 14.04), for the past 4 months. I am using GitLab v. 8.3.4 running on the same Digital Ocean droplet as the Meteor app. It is a 2 GB / 2 CPUs droplet ($20 a month), and we use the built-in GitLab CI for CI/CD. This setup has been running successfully till now. (We are currently not using Docker; however, this should not matter.)
Our CI/CD strategy:
We check out the master branch on our local laptop. The branch contains the whole Meteor project.
We use the git CLI tool on Windows to connect to our GitLab server (for pull, push, and similar regular git activities).
Open the checked-out project in the Atom editor. We have also integrated Atom with GitLab, which helps with quick git status/pull/push etc. within the Atom editor itself. Do regular Meteor work, viz. fix bugs etc.
After testing on the local laptop, we commit and push to master. This triggers an automatic build using GitLab CI, and the results (including build logs) can be seen in GitLab itself.
Please follow the steps below:
Install Meteor on the DO droplet.
Install GitLab on DO (using the 1-click deploy if possible) or do a manual installation. Ensure you are installing GitLab v. 8.3.4 or a newer version. I had done a DO one-click deploy on my droplet.
Start the GitLab server & log into GitLab from a browser. Open your project and go to Project Settings -> Runners from the left menu.
SSH to your DO server & configure a new upstart service on the droplet as root:
vi /etc/init/meteor-service.conf
Sample file:
# upstart service file at /etc/init/meteor-service.conf
description "Meteor.js (NodeJS) application for example.com:3000"
author "rohanray@gmail.com"

# When to start the service
start on runlevel [2345]

# When to stop the service
stop on shutdown

# Automatically restart process if crashed
respawn
respawn limit 10 5

script
    export PORT=3000
    # this allows Meteor to figure out the correct IP address of visitors
    export HTTP_FORWARDED_COUNT=1
    export MONGO_URL=mongodb://xxxxxx:xxxxxx@example123123.mongolab.com:59672/meteor-db
    export ROOT_URL=http://<droplet_ip>:3000
    exec /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/node /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/main.js >> /home/gitlab-runner/erecaho-build/server-alpha-running/meteor.log
end script
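Once the file is in place, the service can be driven with the standard upstart commands (the same ones used again in the deploy job below):
sudo start meteor-service
sudo status meteor-service
sudo restart meteor-service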
Install gitlab-ci-multi-runner from here: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md as per the instructions
Cheatsheet:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-ci-multi-runner
sudo gitlab-ci-multi-runner register
Enter the details from step 2. The new runner should now show up as green; activate the runner for the project if required.
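If you prefer a non-interactive registration, something along these lines should also work (the URL, token and description are placeholders; this assumes the shell executor):
sudo gitlab-ci-multi-runner register \
  --non-interactive \
  --url "http://<your-gitlab-host>/ci" \
  --registration-token "<token-from-runners-page>" \
  --description "meteor-shell-runner" \
  --executor "shell"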
Create .gitlab-ci.yml within the meteor project directory
Sample file:
before_script:
  - echo "======================================"
  - echo "==== START auto full script v0.1 ====="
  - echo "======================================"

types:
  - cleanup
  - build
  - test
  - deploy

job_cleanup:
  type: cleanup
  script:
    - cd /home/gitlab-runner/erecaho-build
    - echo "cleaning up existing bundle folder"
    - echo "cleaning up current server-running folder"
    - rm -fr ./server-alpha-running
    - mkdir ./server-alpha-running
  only:
    - master
  tags:
    - master

job_build:
  type: build
  script:
    - pwd
    - meteor build /home/gitlab-runner/erecaho-build/server-alpha-running --directory --server=http://example.org:3000 --verbose
  only:
    - master
  tags:
    - master

job_test:
  type: test
  script:
    - echo "testing ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle
    - ls -la main.js
  only:
    - master
  tags:
    - master

job_deploy:
  type: deploy
  script:
    - echo "deploying ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/programs/server/ && /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/npm install
    - cd ../..
    - sudo restart meteor-service
    - sudo status meteor-service
  only:
    - master
  tags:
    - master
Check the above file into GitLab. This should trigger GitLab CI, and after the build process is complete the new app will be available at example.net:3000.
Note: The app will not be available right after checking in .gitlab-ci.yml for the first time, since restart meteor-service will fail with "service not found". Manually run sudo start meteor-service once on the DO SSH console. After that, any new check-in to the GitLab master branch will trigger the auto CI/CD, and the new version of the app will be available on example.com:3000 after the build completes successfully.
P.S.: The GitLab CI YAML docs can be found at http://doc.gitlab.com/ee/ci/yaml/README.html for your customization and to understand the sample YAML file above.
For a Docker-specific runner, please refer to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner
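Since the original question asked about rebuilding a Docker container on push, a deploy job for that could look roughly like the sketch below when using a shell runner on the target server (the image/container name myapp and the port are made-up placeholders, not part of the setup described above):
job_deploy_docker:
  type: deploy
  script:
    - docker build -t myapp .
    - docker stop myapp || true
    - docker rm myapp || true
    - docker run -d --name myapp -p 3000:3000 myapp
  only:
    - master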
I configured the salt-stack environment like below:
machine1 -> salt-master
machine2 -> salt-minion
machine3 -> salt-minion
This setup is working for me and I can publish, e.g., the command "ls -l /tmp/" from machine2 to machine3 with:
salt-call publish.publish 'machine3' cmd.run 'ls -l /tmp/'
How is it possible to restrict the commands that can be published?
In the current setup it is possible to execute every command on machine3, and that would be very risky. I looked in the salt-stack documentation, but unfortunately I didn't find any example of how to configure this accordingly.
SOLUTION:
On machine1, create the file /srv/salt/_modules/testModule.py and insert some code like:
#!/usr/bin/python
# custom execution module; __salt__ is injected by the Salt loader
def test():
    return __salt__['cmd.run']('ls -l /tmp/')
To distribute the new module to the minions, run:
salt '*' saltutil.sync_modules
On machine2, run:
salt-call publish.publish 'machine3' testModule.test
The peer configuration in the Salt master config can limit which commands a given minion is allowed to publish, e.g.:
peer:
  machine2:
    machine1:
      - test.*
      - cmd.run
    machine3:
      - test.*
      - disk.usage
      - network.interfaces
This will allow minion machine2 to publish test.* and cmd.run commands to machine1, and test.*, disk.usage and network.interfaces to machine3.
P.S. Allowing minions to publish the cmd.run command is generally not a good idea; it is only shown here as an example.
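With that peer configuration in place, a quick check from machine2 could look like this (assuming the configuration above; the second call should be rejected by the master because cmd.run is not whitelisted for machine3):
salt-call publish.publish 'machine3' disk.usage
salt-call publish.publish 'machine3' cmd.run 'ls -l /tmp/'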