Override sanity check when running Plone buildout? - plone

I'm running Ubuntu 12.04 LTS with an Apache server for my Jira/Confluence applications.
Now I need to additionally install a (production) instance of Plone.
But port 8080 is already taken by Jira, and so far I haven't found working instructions for changing it.
I followed these instructions to install plone:
http://developer.plone.org/getstarted/ubuntu_production.html
Do I have to take care of the port while following these instructions?
I have found this page (2.5. Creating New Instances): http://plone.org/documentation/manual/installing-plone/referencemanual-all-pages, which says you have to change some settings in buildout.cfg. But even as a sudoer I can't run those instructions. I get this:
buildout.sanitycheck:
***********************************************************
Buildout should not be run while superuser. Doing so allows
untrusted code to be run as root.
Instead, you probably wish to do something like:
sudo -u plone_buildout bin/buildout
If you have a good reason to bypass this restriction,
remove the buildout.sanitycheck extension from your buildout.
***********************************************************
While:
Installing.
Loading extensions.
Error: User attempt to give system ownership to Internet
*************** PICKED VERSIONS ****************
[versions]
*************** /PICKED VERSIONS ***************
But how can I remove the sanity check? I can't find it in this file.

We've got multiple issues here.
Changing Ports
To change the port Plone attaches to, edit buildout.cfg and look for the lines:
[instance]
<= instance_base
recipe = plone.recipe.zope2instance
http-address = 8080
Change '8080' to the desired port. If this is a ZEO configuration, look for 'client#' parts instead and change their port numbers. Choose ports > 1024. After editing, run buildout.
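For example, a standalone instance edited to listen on port 9080 might look like the sketch below; the port 9080 is just an assumption, any free port above 1024 works. In a ZEO cluster, make the same change to the http-address line in each client# part, giving each client its own port.
[instance]
<= instance_base
recipe = plone.recipe.zope2instance
http-address = 9080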
Running Buildout
If you used sudo to run the Unified Installer, that caused it to create plone_buildout and plone_daemon system users. The "plone_buildout" user is meant to be used to run buildout, and owns the code files. The "plone_daemon" user is meant to be used to run the long-lived processes that connect to the Internet, and it owns the data.
This scheme is carefully contrived so that you do not have to run buildout as root, and so that the long-lived daemon processes will have (close to) minimum privileges. Under this scheme, you run buildout as the plone_buildout user, generally with the command:
sudo -u plone_buildout bin/buildout
The command "sudo -u username" causes the rest of the command line to be executed under the effective ownership of the specified user.
It is generally a very, very bad idea to run buildout as root. That's why the sanity check exists. Running buildout as root means that you are giving control of your system to the author of every setup.py file in every module downloaded by buildout. Don't do it.
A note on a common misconception: The Unified Installer, when run as root, via sudo, does not run buildout as root (at least not in any recent version). It uses root privileges to create a plone_buildout user, then runs buildout as that user.
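Putting the pieces together, a typical port change under this scheme looks roughly like this sketch (the /usr/local/Plone/zeocluster path is an assumption based on a default root install; adjust it to your buildout directory):
cd /usr/local/Plone/zeocluster
# edit buildout.cfg (or the client# parts) to change http-address
sudo -u plone_buildout bin/buildout
sudo bin/plonectl restart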

Just remove buildout.sanitycheck from base.cfg.
Since the docs also instruct you to use sudo after a root install, the warning only makes sense for a non-root install.
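The exact layout of base.cfg varies between installer versions, but the extension is normally registered under the [buildout] section; removing (or commenting out) that single line disables the check. A sketch, not the literal file contents:
[buildout]
# remove or comment out the following line to bypass the sanity check
extensions =
    buildout.sanitycheck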

How to use/enable PHP extensions for CLI on swisscomdev/cloudfoundry php_buildpack?

** EDIT **
In several cases we need to call Symfony commands on a deployed CloudFoundry app. Symfony commands are PHP scripts that are run with the PHP CLI.
One example is bin/console doctrine:schema:update (but it could be user generation, cache clearing, etc.).
So for our app we need both fpm and cli enabled. This is done with:
"PHP_MODULES": [
"fpm",
"cli"]
in options.json.
After connecting to the app with cf ssh, I change to the app directory and call php/bin/php doctrine:schema:update; this results in a ClassNotFound: PDO issue.
During staging these commands are called successfully.
I checked (via php -i) that the PDO extension is not available for the PHP CLI, although I have listed it in options.json:
"PHP_EXTENSIONS": [
...
"pdo",
"pdo_mysql",
...]
How to enable extensions for CLI and FPM on one app? And is it theoretically possible to have different extensions for CLI and FPM and as well different user-php.ini s to fully/particularly override php.ini of CLI and FPM?
So for our app we need both, fpm and cli enabled. This is done with:
"PHP_MODULES": [ "fpm", "cli"]
in options.json.
This is something that we should probably clean up in the buildpack. I do not believe PHP_MODULES is actually used any more.
Maybe a year or more ago, the buildpack switched how it downloads PHP. It would previously download individual components, modules & extensions; now it just downloads everything at once. This actually ends up being faster, since it's one larger download vs. many smaller downloads, and bandwidth is generally very fast for buildpack downloads.
Worth mentioning that while PHP_EXTENSIONS no longer controls what gets downloaded, it is still used to decide which extensions get enabled in php.ini. Thus you still need to set it, or indicate the extensions through Composer (see the sketch below).
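A minimal sketch of the Composer route, assuming the buildpack picks up platform ext-* requirements from your composer.json; the extension names mirror the options.json entries above:
{
    "require": {
        "ext-pdo": "*",
        "ext-pdo_mysql": "*"
    }
}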
After connecting to the app with cf ssh
I believe that this is the issue. You need to source the build pack env variables so that the env is configured properly.
Ex:
vcap@359b74ff-686c-494e-4a1e-46a9c420f262:~$ php
bash: php: command not found
vcap@359b74ff-686c-494e-4a1e-46a9c420f262:~$ HOME=$HOME/app source app/.profile.d/bp_env_vars.sh
vcap@359b74ff-686c-494e-4a1e-46a9c420f262:~$ php -v
PHP 5.6.26 (cli) (built: Oct 28 2016 22:24:22)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
Staging does this automatically, as does runtime for your app. Unfortunately, cf ssh does not.
UPDATE:
A slightly easier way to do this is to run cf ssh myapp -t -c "/tmp/lifecycle/launcher /home/vcap/app bash ''". This will open a bash shell and it lets the lifecycle launcher handle sourcing & setting up the environment.
And is it theoretically possible to have different extensions for CLI and FPM and as well different user-php.ini s to fully/particularly override php.ini of CLI and FPM?
Sure. By default, we download and install all extensions. Thus you just need a different php.ini (or some other setting to enable that extension) in which you enable your alternate set of extensions.
When you cf ssh into the container, you could copy the existing php.ini somewhere else and edit it for your CLI needs. Then reference that php-alt.ini when you run your CLI commands.
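A minimal sketch of that approach, assuming the generated php.ini lives under app/php/etc/ and running from the container's home directory; the paths and the php-alt.ini name are placeholders:
# inside the container, after sourcing the buildpack env as shown above
cp app/php/etc/php.ini /home/vcap/php-alt.ini
# edit /home/vcap/php-alt.ini and enable the extensions your CLI commands need
php -c /home/vcap/php-alt.ini app/bin/console doctrine:schema:update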
Never did this, but does enabling the PHP CLI in PHP_MODULES (https://docs.developer.swisscom.com/buildpacks/php/gsg-php-config.html) help?

Running meteor on linux server

I am trying to get my local Meteor app working on my remote (MediaTemple) server.
I have bundled it up and have a /myurl.com/bundle folder containing the following files:
main.js
npm-debug.log
programs
server
How do I get this to run?
You should take a look at the README inside the bundle folder. Normally everything you need to start your app is described there.
Make sure that Node.js and MongoDB are installed on your remote server. Neither of them is included in your bundle.
If you are running a system like Debian or Ubuntu, you can normally do the installation with
apt-get install nodejs mongodb
Make sure that Node.js is at release v0.10.36 or v0.10.38:
node --version
In the README you will see the necessary environment variables, like MONGO_URL and PORT, that you need to set to start your Meteor app.
If you already have an Apache server running, port 80 is blocked, so try PORT=3000 to start your Meteor app.
Example:
MONGO_URL='mongodb://localhost:27017/yourapp' ROOT_URL="http://yourhost" PORT=3000 node main.js
If you use it as above, you do not need to export the environment variables before starting; alternatively you can export them first, as sketched below.
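A minimal sketch of the export variant, using the same placeholder values as the example above:
export MONGO_URL='mongodb://localhost:27017/yourapp'
export ROOT_URL='http://yourhost'
export PORT=3000
node main.js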
Sometimes when starting, NPM packages are missing and you get fibers errors.
In that case run
cd programs/server
npm install
and then try starting again.
Good luck
Tom
(I'm writing this response assuming that you are not worried about scalability issues; respond in a comment if you want to scale your app.)
The best option for running a Node application, which a Meteor application is, is to use forever.
npm install -g forever
forever start simple-server.js
If you want to figure out how to see the log files and how to stop/restart your service, you can run forever --help to see all the commands.
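For the bundle from the question, a minimal sketch might look like this; MONGO_URL, ROOT_URL and the port are the same placeholders as in the earlier answer:
cd /myurl.com/bundle               # the bundle folder from the question
export MONGO_URL='mongodb://localhost:27017/yourapp'
export ROOT_URL='http://yourhost'
export PORT=3000
forever start main.js              # run the bundled Meteor app in the background
forever list                       # show running processes and their log files
forever stop main.js               # stop it again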

How to correctly install dokku - with or without sudo?

I'm learning dokku right now for simple web deployment. Official install instructions state this command:
wget -qO- https://raw.github.com/progrium/dokku/v0.3.12/bootstrap.sh | sudo DOKKU_TAG=v0.3.12 bash
I'm not a devops engineer or admin, but as far as I understand this line, it performs all bootstrapping and installation under the root account, thanks to sudo. So dokku will be checked out into a directory with root access rights, and all additional directories like /var/lib/dokku/ will also have root access rights.
The problem is that all the articles about dokku across the internet instruct you to execute the dokku command, or do dokku-related actions, without sudo. For example, the instructions for this dokku database plugin, https://github.com/krisrang/dokku-mariadb, say to install it via:
cd /var/lib/dokku/plugins
git clone https://github.com/krisrang/dokku-mariadb mariadb
dokku plugins-install
This is not working, since /var/lib/dokku/plugins has root access rights and git clone fails with access denied. It's hard to be a non-admin nowadays, but maybe someone can hint at what I'm doing wrong? Do I need to install dokku some other way, or do all dokku-related tutorials across the internet assume that I'm executing them as root (which, to my limited admin knowledge, is highly discouraged for security reasons)?
You should run those three commands as root. First become root with:
sudo su -
The dokku binary will run code as the dokku user even if you execute it as root, so it should be fine to run it as is. Once you are root, just run the install instructions listed in your question (see the sketch below). Hope my answer helps! :)
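Put together, the plugin install from the question would then look roughly like this sketch (same repository and plugin name as above):
sudo su -                       # become root; dokku itself still drops to the dokku user
cd /var/lib/dokku/plugins
git clone https://github.com/krisrang/dokku-mariadb mariadb
dokku plugins-install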
I also contacted them, and they mentioned:
In the future, we'll have a method to install plugins directly with a dokku command.
As far as I can tell, you need to run it as root. A traditional way to install a program without root privileges is to download the source and compile it, which can be done by running:
git clone https://github.com/progrium/dokku.git
cd dokku
make
make install
Dokku's makefile depends on apt-get, which requires root access to run.
I'm not familiar with dokku or dokku-mariadb, but I think the author of dokku-mariadb also assumes root access.
For people running into the question of whether it's fine to install through the root user (on freshly created VMs, as per the guide), try checking this GitHub issue:
https://github.com/dokku/dokku/issues/961
Since the commands related to dokku are prefixed with # rather than $, it means that it's not necessary to run them from a non-root user. It also makes writing sudo unnecessary (and, from my experience, counterproductive).

Recompile Nginx with additional modules

I installed Nginx via apt-get on Debian a while ago, and I've got a couple of sites live on it. Now I need to install some additional modules, and as I don't want to mess anything up I'd like to double check my process before I perform it. Hopefully this will also help others that are unsure about this part.
As I've understood it I have to do the following to minimize the downtime:
Download the source for Nginx
Add the additional modules with ./configure --add-module=/path/to/module
Compile Nginx with make
Stop the current server (service nginx stop)
Install Nginx with make install
Start the new server (service nginx start)
Or do I have to uninstall Nginx first, as it's not compiled from source at this point?
Having done something similar on Ubuntu before: yes, the installation should overwrite the existing nginx binaries with the newly compiled ones, so long as you ensure nginx isn't running on the system at the time.
I'd recommend trying to install nginx elsewhere on the system, so in case you can't get it to work quickly, you can restart your web server with the old nginx binaries and not have significant downtime.
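One way to do that is to install the test build under its own prefix. A minimal sketch, assuming /opt/nginx-test as the throwaway location and a placeholder module path:
./configure --prefix=/opt/nginx-test --add-module=/path/to/extra-module
make
make install
/opt/nginx-test/sbin/nginx -t    # test the build without touching the packaged nginx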
nginx -V is a helpful command: it shows the ./configure options that were used to build the nginx binary that is currently running, which gives you a good picture of the existing setup.
apt-get source nginx fetches the source of the packaged version.
make install will automatically replace the currently installed version with the new one.
Also keep in mind that some nginx modules require additional libraries on the server; the geoip module is a classic example of this.
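A minimal sketch of the whole workflow described above; the module path is a placeholder, and you should paste the flags reported by nginx -V into the ./configure line:
nginx -V                       # note the existing ./configure arguments
apt-get source nginx
cd nginx-*/
./configure [existing flags] --add-module=/path/to/extra-module
make
service nginx stop
make install                   # overwrites the installed nginx binary
service nginx start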

Plone Command not Found when starting plone

I have just installed Plone on Debian Squeeze without problems. I am trying to start it with "plone /usr/local/Plone/zeocluster/bin/plonectl start" and I receive "Command not found".
Are there any paths I need to export? "which plone" gives me nothing.
Did I miss something?
You don't need the "plone" at the beginning of your command, but you probably do need "sudo".
Try sudo /usr/local/Plone/zeocluster/bin/plonectl start
Because Plone is a server built with Python, there is no special plone command.
Presumably you used the Plone Unified Installer, creating a ZEO installation. Because it was installed in /usr/local/Plone, I am also assuming you installed it as root.
Information on how to run Plone after installation is found on the Installation Quick Guide (under "Last steps"); you simply run the command ./bin/plonectl start, or, with your full path: /usr/local/Plone/zeocluster/bin/plonectl start.
If you are not logged in as root still, you'll need to run that command with sudo; the server will automatically switch to the dedicated plone user installed by the Unified Installer.
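So, assuming the paths above, starting and checking the cluster would look something like this (plonectl also accepts stop and restart):
sudo /usr/local/Plone/zeocluster/bin/plonectl start
sudo /usr/local/Plone/zeocluster/bin/plonectl status    # confirm the ZEO server and clients are running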
