How to decrypt encrypted Travis variables?

Travis allows one to encrypt variables in the configuration scripts:
https://docs.travis-ci.com/user/environment-variables/#Defining-encrypted-variables-in-.travis.yml
https://docs.travis-ci.com/user/encryption-keys
(1) If you do not have the travis gem installed, run gem install travis.
(2) In your repository directory, run:
travis encrypt MY_SECRET_ENV=super_secret --add env.matrix
(3) Commit the changes to your .travis.yml.
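For reference, a sketch of roughly what the command in step (2) adds to .travis.yml (the secure value below is an illustrative placeholder, not a real ciphertext):

```yaml
env:
  matrix:
  - secure: "<base64-encrypted-blob>"
```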
After encrypting these variables, how do I find the "identity" of these variables once encrypted? Is this possible?
Let's say several months later I forget what the encrypted variables are; how do I find these out again?

How to manage dotfiles across multiple environments with different users?

Scenario: I want my Zsh and Oh My Zsh setup to be the same across my personal Mac, my work Mac, my Linux desktop, and my Raspberry Pis.
Each of these has a different username (and even a different path to its home directory: /Users/MyUserName on Mac and /home/MyUserName on Linux).
I tried creating a git repo for my .zshrc and wrote some basic scripts that git pull all my plugins, but problems arose when I tried to install on a new Raspberry Pi: the path to my home directory depended on the system, and the .oh-my-zsh install script uses the ZSH environment variable to install itself. This meant I needed to create a pre-oh-my-zsh .zshrc that detected the system with uname -s and set the prefix for the ZSH variable appropriately.
Unfortunately, .oh-my-zsh just overwrites this, so whenever I want to edit my config and push it to the git repo I have to re-install each time. It seems like there must be a solution.
How do I make my zsh dotfiles agnostic to the machine environment and the username in paths, so that I can install .oh-my-zsh and propagate updates to my dotfiles to my other machines?
I would recommend keeping your config dotfiles in a single git config repo and then creating symlinks to where they ought to live on each machine. Symlinks are more maintainable than copying configs around: updates can be made either in the config-repo directory or at the location where you created the symlink.
For example, create a config repo containing your .zshrc file and clone it to all relevant machines. Next, create a symlink for .zshrc from your config repo, ~/dev/config/.zshrc, to its default location, the home directory, on each machine.
cd ~
ln -s ~/dev/config/.zshrc .
Now if that git-managed .zshrc file is ever updated on one machine, the changes can be pulled on another via the config repo, and the .zshrc symlinked into the home directory is updated as well. And vice versa: if ~/.zshrc is edited, the changes are reflected in the config repo's change log and can easily be committed and pushed for other machines to pull.
This setup also works for your oh-my-zsh config. I would recommend following the standard install procedure on each machine, since it essentially clones from source into its own local git directory. However, the .oh-my-zsh/custom directory is what you'll want to keep in sync for your custom functions, aliases, templates, plugins, etc.
cd ~/.oh-my-zsh
rm -rf custom
ln -s ~/dev/config/oh-my-zsh/custom .
This solution lets you track your zsh and oh-my-zsh configs in version control for maintainability, and it keeps your machines interoperable. It can of course be extended to any type of config file or directory.
Note: all of this could of course be automated with a provisioning tool like Puppet, Ansible, or Chef. Furthermore, the management of the symlinks could be assisted with a solution such as GNU Stow.
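As a concrete illustration, here is a minimal bootstrap sketch of the symlinking described above. The ~/dev/config path follows the example; the helper function is hypothetical, so adjust paths to your own repo layout.

```shell
#!/bin/sh
# Bootstrap sketch: link dotfiles from the config repo into place.
# CONFIG_DIR follows the ~/dev/config example above; adjust as needed.
CONFIG_DIR="$HOME/dev/config"

link() {
  src="$CONFIG_DIR/$1"
  dest="$2"
  mkdir -p "$(dirname "$dest")"   # ensure the parent directory exists
  rm -rf "$dest"                  # replace any existing file/dir/symlink
  ln -s "$src" "$dest"
}

link .zshrc "$HOME/.zshrc"
link oh-my-zsh/custom "$HOME/.oh-my-zsh/custom"
```

Re-running the script is safe, so it can be run right after cloning the config repo on a new machine.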
You should use a dotfiles manager; there are many different ones.
I would recommend yadm.
You can see an example in my dotfiles repo:
https://github.com/khongi/dotfiles

How to build owncloud in windows?

I've read the manual for building the client on Windows and followed the steps one by one up to step 4,
but unfortunately I'm not familiar with Qt and CMake, and the manual doesn't explain them well.
Can anyone tell me how I should complete the rest of the steps?
I don't know what to do with this step:
set PATH=C:<OpenSSL Install Dir>\bin;%PATH%
set PATH=C:<qtkeychain Clone Dir>;%PATH%
https://doc.owncloud.org/desktop/2.3/building.html#windows-development-build
These commands prepend the location of the OpenSSL binaries folder and the qtkeychain clone folder to your PATH.
They assume you already have these two tools installed on your system; here you are just telling the build where to find them.

How to use/enable PHP extensions for CLI on swisscomdev/cloudfoundry php_buildback?

** EDIT **
For several cases we need to call Symfony commands on a deployed CloudFoundry app. Symfony commands are php scripts which are called with the PHP CLI.
One example is bin/console doctrine:schema:update (but could be user generation, cache clearing etc.)
So for our app we need both, fpm and cli enabled. This is done with:
"PHP_MODULES": [
"fpm",
"cli"]
in options.json.
After connecting to the app with cf ssh, I change to the app directory and call php/bin/php doctrine:schema:update; this results in a ClassNotFound: PDO error.
During staging these commands are called successfully.
I checked that the PDO extension is not available for the PHP CLI (via php -i), although I have listed it in options.json.
"PHP_EXTENSIONS": [
...
"pdo",
"pdo_mysql",
...]
How can I enable extensions for both CLI and FPM in one app? And is it theoretically possible to have different extensions for CLI and FPM, as well as different user-php.ini files to fully or partially override the php.ini for CLI and FPM?
So for our app we need both, fpm and cli enabled. This is done with:
"PHP_MODULES": [ "fpm", "cli"]
in options.json.
This is something that we should probably clean up in the build pack. I do not believe it (PHP_MODULES) is actually used any more.
Maybe a year or more ago, the build pack switched how it downloads PHP. It previously downloaded individual components, modules, and extensions; now it downloads everything at once. This actually ends up being faster, since it's one larger download instead of many smaller ones, and bandwidth for build pack downloads is generally very fast.
Worth mentioning: while PHP_EXTENSIONS no longer controls what gets downloaded, it is still used to determine which extensions get enabled in php.ini. Thus you still need to set it, or indicate extensions through composer.
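For the composer route, a sketch of requesting extensions in composer.json; the list here just mirrors the extensions from the question:

```json
{
    "require": {
        "ext-pdo": "*",
        "ext-pdo_mysql": "*"
    }
}
```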
After connecting to the app with cf ssh
I believe that this is the issue. You need to source the build pack env variables so that the env is configured properly.
Ex:
vcap@359b74ff-686c-494e-4a1e-46a9c420f262:~$ php
bash: php: command not found
vcap@359b74ff-686c-494e-4a1e-46a9c420f262:~$ HOME=$HOME/app source app/.profile.d/bp_env_vars.sh
vcap@359b74ff-686c-494e-4a1e-46a9c420f262:~$ php -v
PHP 5.6.26 (cli) (built: Oct 28 2016 22:24:22)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
Staging does this automatically, as does runtime for your app. Unfortunately, cf ssh does not.
UPDATE:
A slightly easier way to do this is to run cf ssh myapp -t -c "/tmp/lifecycle/launcher /home/vcap/app bash ''". This will open a bash shell and it lets the lifecycle launcher handle sourcing & setting up the environment.
And is it theoretically possible to have different extensions for CLI and FPM, as well as different user-php.ini files to fully or partially override the php.ini for CLI and FPM?
Sure. By default, we download and install all extensions. Thus you just need a different php.ini (or some other setting to enable that extension) in which you enable your alternate set of extensions.
When you cf ssh into the container, you could copy the existing php.ini somewhere else and edit it for your CLI needs. Then reference that php-alt.ini when you run your CLI commands.
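A sketch of what that alternate ini could contain; the extension file names are assumptions to verify against your container (e.g. via php -i):

```ini
; php-alt.ini (sketch): copy the existing php.ini first,
; then enable the extra extensions your CLI commands need
extension=pdo.so
extension=pdo_mysql.so
```

Then run CLI commands against it with PHP's -c flag, e.g. php -c /path/to/php-alt.ini bin/console doctrine:schema:update.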
Never did this but does enabling the php cli in PHP_MODULES (https://docs.developer.swisscom.com/buildpacks/php/gsg-php-config.html) help?

OpenShift Custom Cartridge and NPM

I am working with a community-developed OpenShift cartridge for nginx. The cartridge's build script (without any modifications) works well; it starts the nginx server with the configuration file that I provide it. However, I am trying to modify the build script so that it first changes directory into my OpenShift repository, runs npm install and then grunt build to build an Angular application that I have created.
When I do this, I continuously get the error EACCES, mkdir '/var/lib/openshift/xxxxxxxxxx/.npm' when the script gets to npm install. Some OpenShift forum posts have attempted to solve the issue, but it appears as though a different solution is required (at least in my case).
Thus, I am interested in whether or not it is possible to use npm in this way, or if I need to create a cartridge that does all of this myself.
Since we do not typically have the access required to create ~/.npm, we have to find ways of moving the npm cache (normally ~/.npm) and the npm user configuration (normally ~/.npmrc) to accessible folders to get things going. The following information comes partially from a bug report that I submitted to Redhat on this matter.
We must begin by creating an environment variable to control the location of .npmrc. I created a file (with shell access to my application) called .env in $OPENSHIFT_DATA_DIR. Within this file, I have placed:
export NPM_CONFIG_USERCONFIG=$OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npmrc
This moves the .npmrc file to a place where we have the privileges to read/write. Naturally, I also have to create the .npmrc file in $OPENSHIFT_HOMEDIR/app-root/build-dependencies/. Then, in my pre-start webhook/early in my build script, I have placed:
touch $OPENSHIFT_DATA_DIR/.env
This ensures that the environment variable that configures the location of .npmrc will be accessible each time we deploy/build. Now we can move the location of the npm cache. Start by running touch on the .env file manually, and create the .npm directory in $OPENSHIFT_HOMEDIR/app-root/build-dependencies/. Run the following to complete the reconfiguration:
npm config set cache $OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npm
NPM should now be accessible each time we deploy, even if we are not using the NodeJS cartridge. The above directory choices may be changed as desired.
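The steps above can be condensed into one sketch. The OPENSHIFT_* variables are set by the platform; the fallbacks below are only so the snippet runs standalone, and writing cache= into .npmrc directly is an alternative to running npm config set.

```shell
#!/bin/sh
# Sketch: relocate npm's user config and cache to writable directories.
# Fallbacks are for running outside OpenShift; the platform sets these vars.
: "${OPENSHIFT_HOMEDIR:=$HOME}"
: "${OPENSHIFT_DATA_DIR:=$HOME/app-root/data}"

DEPS="$OPENSHIFT_HOMEDIR/app-root/build-dependencies"
mkdir -p "$DEPS" "$OPENSHIFT_DATA_DIR"

# Store the user-config location in .env, as described above
echo "export NPM_CONFIG_USERCONFIG=$DEPS/.npmrc" > "$OPENSHIFT_DATA_DIR/.env"

# Relocate the npm cache by writing the setting into that .npmrc directly
echo "cache=$DEPS/.npm" >> "$DEPS/.npmrc"
```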
You do not have write access to the ~/.npm directory in your gear. You might try reviewing how the native node.js cartridge is setup (https://github.com/openshift/origin-server/tree/master/cartridges/openshift-origin-cartridge-nodejs) and see if you can apply it to your custom cartridge.

Have configure.ac But Not autoconf. Can I Generate Configure Without It?

I'm trying to build curl (specifically libcurl) on my Android device; I've built OpenSSL and have cloned the repo. Unfortunately the curl sources use buildconf, which requires autoconf, and I don't have autoconf installed.
Is there an alternate way to generate the configure script and/or the Makefile from the included configure.ac and Makefile.in?
The source tarballs provided by the curl project include generated configure scripts, so there's no need for autoconf! You can get release versions or daily snapshots from curl.haxx.se.
The configure script is generally generated with the ./buildconf script in the curl source code root directory and it requires autoconf, automake and libtool to be installed.
