Installing Citadel on OpenShift - root

Can I install Citadel on OpenShift? The installation process requires root privileges, but OpenShift does not provide that. Is there any workaround? Thanks.

You cannot install anything that requires root access. Sometimes you can work around this by installing things into your app-root/data directory.

OCP uses SCCs (Security Context Constraints), which by default allocate random UIDs to your application pods. However, if a particular app needs a specific UID, or root as in your case, you can use the anyuid SCC instead of the default restricted SCC. This can only be done with cluster-admin privileges.
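For example, a minimal sketch of granting the anyuid SCC as a cluster-admin, assuming the pods run under the default service account in a project named citadel (both names are assumptions):

# Run as cluster-admin: allow pods of the "default" service account
# in the (assumed) "citadel" project to run with any UID, including root
oc adm policy add-scc-to-user anyuid -z default -n citadel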

Related

Upgrading Artifactory setup with Remote Repositories

I have an Artifactory server with a bunch of remote repositories.
We are planning to upgrade from 5.11.0 to 5.11.6 to take advantage of a security patch in that version.
Questions are:
Do all repositories need to be on exactly the same version?
Is there anything else I need to think about when upgrading multiple connected repositories? (There is nothing specific about this in the manual.)
Do I need to do a system-level export just on the primary server, or should I be doing it on all of the remote repository servers?
Lastly, our repositories are huge... a full system export as a backup will take too long...
Is it enough to just take the config files/dirs?
Do I get just the config files/dirs by checking "Exclude Content"?
If you have an Artifactory instance that points to other Artifactory instances via smart remote repositories, you will not have to upgrade all of the instances; they will be able to communicate with each other even if they are not on the same version. That said, it is always recommended to run the latest version of Artifactory (for all of your instances) in order to enjoy the latest features, bug fixes, and the best compatibility between instances. You may find further information about the upgrade process in this wiki page.
In addition, it is always recommended to keep backups of your Artifactory instance, especially when attempting an upgrade. You may use the built-in backup mechanism, or you may manually back up your filestore (by default located in $ARTIFACTORY_HOME/data/filestore) and take database snapshots.
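For example, a rough sketch of the manual approach, assuming the default filestore location, a systemd service named artifactory, and a MySQL-backed database with an "artifactory" user and an "artdb" schema (all of these are assumptions; adapt to your setup):

# Stop Artifactory so the filestore and database stay consistent (assumed service name)
sudo systemctl stop artifactory
# Archive the filestore (default location)
tar -czf /backup/artifactory-filestore-$(date +%F).tar.gz "$ARTIFACTORY_HOME/data/filestore"
# Snapshot the database (example for a MySQL-backed install; user and schema name are assumptions)
mysqldump -u artifactory -p artdb > /backup/artifactory-db-$(date +%F).sql
sudo systemctl start artifactory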
What do you mean by
do all repositories need to be on exactly the same version?
Are you asking about Artifactory instances? Artifactory HA nodes?
Regarding the full system export:
https://www.jfrog.com/confluence/display/RTF/Managing+Backups
https://jfrog.com/knowledge-base/how-should-we-backup-our-data-when-we-have-1tb-of-files/
For more info, you might want to contact JFrog's support.

How to install Graphite and all its prerequisites as a user without root permissions?

We are trying to install Graphite to capture Neo4j database metrics. The installation will be done under the neo4j user, which does not have root permissions. On the web there are multiple pages that detail this procedure, but most of them fail at one stage or another. Is there a way to install all components of Graphite and its prerequisites as a non-root user?
If you don't have root, then you are most likely not supposed to install applications on that server, and you should ask your system administrator to install it for you.
That said, on which prerequisite does it fail? You can install all the Python parts in a virtualenv. For cairo and other system requirements it's a bit harder, but still doable. You'll also have some issues getting it started automatically after a reboot.
I'm actually working on making installation easier and updating the documentation. I could use some feedback.
https://github.com/piotr1212/graphite-web/blob/setuptools/docs/install.rst#using-virtualenv <- work in progress; you will have to follow virtualenv -> install from source, but replace "graphite-project" with "piotr1212" in the GitHub URLs and git checkout setuptools in every directory before running pip install .
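As a rough sketch of that route, assuming the neo4j user's home directory for the virtualenv and that each of the piotr1212 forks (whisper, carbon, graphite-web) has a setuptools branch as described above:

# Create and activate an isolated Python environment (path is an assumption)
virtualenv /home/neo4j/graphite-venv
. /home/neo4j/graphite-venv/bin/activate
# Install each component from source on the setuptools branch, no root needed
for repo in whisper carbon graphite-web; do
    git clone https://github.com/piotr1212/${repo}.git
    cd ${repo}
    git checkout setuptools
    pip install .
    cd ..
done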

Saving instances on OpenStack

I have installed OpenStack by setting up a DevStack environment, and I am finding it difficult to save my work after a host reboot. However, if I install OpenStack component-wise, will that help me in any way with saving my work after a host reboot, and are there any extra benefits of installing OpenStack component-wise?
Installing OpenStack component-wise would certainly enhance your end-to-end understanding of how the services interact with each other. DevStack is an all-in-one sort of installation. For a better understanding, I'd recommend installing each component manually following the OpenStack documentation.
The reason the VM data is lost after a reboot is that you launched the VM with an ephemeral disk, which is gone after the VM reboots.
Try creating the instance with a volume-backed root disk; then you will have a permanent disk.
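For example, a minimal sketch with the openstack CLI that boots from a Cinder volume so the root disk persists (the image, flavor, network, and size are assumptions):

# Create a bootable volume from an image (names and size are assumptions)
openstack volume create --image cirros-0.5.2 --size 10 boot-vol
# Boot the instance from that volume instead of an ephemeral disk
openstack server create --volume boot-vol --flavor m1.small --network private myserver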

Vagrant shared/synced folders permissions

From my research I understand that VirtualBox synced folders have their permissions set during the mounting process. I am unable to change them later, so the permissions MUST be the same for every single file/folder in the shared folder. When I try to change them, with or without superuser permissions, the changes are reverted straight away.
How can this work with, for example, the Symfony PHP framework, where different files/folders need different permissions? (e.g. app/console needs execute rights, but I don't want 7XX everywhere.)
I found in a different but similar question (Vagrant and symfony2) that I could set the permissions to 777 for everything in the Vagrantfile; however, this is not desirable as I use Git behind my source code, which is then deployed to the live environment. Running everything under 777 in production is, to put it nicely, not correct.
How do you cope with this? What are your permission setups?
A possible solution could be using the rsync synced folder strategy, along with the vagrant rsync and vagrant rsync-auto commands.
This way you'll lose bidirectional sync, but you can manage file permissions and ownership.
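A rough sketch of that setup; the Vagrantfile line is Ruby and is shown here as a comment, and the rsync arguments are only an assumption to illustrate:

# Hypothetical Vagrantfile entry switching the default synced folder to rsync:
#   config.vm.synced_folder ".", "/vagrant", type: "rsync",
#     rsync__args: ["--verbose", "--archive", "--delete", "-z"]
# One-off push from host to guest:
vagrant rsync
# Watch the host folder and push changes to the guest continuously:
vagrant rsync-auto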
I am in a similar situation. I started using Vagrant mount options, and found out that as I upgraded parts of my tech stack (kernel, VirtualBox, Vagrant, Guest Additions) I started getting different behavior while trying to set permissions in synced folders.
At some point, I was perfectly fine updating a few of the permissions in my shell provisioner. At first, the changes were reflected in both the guest and the host. At another point in time, it worked the way I expected, with the changes reflected only in the guest and not in the host filesystem. After updating the kernel and VirtualBox on my host, I noticed that permission changes in the guest are reflected on the host only.
I was trying to use DKMS to compile VirtualBox against an older version of my kernel. No luck yet.
Now that I have a little more experience, I can actually answer this question.
There are three solutions to this problem:
Use Git in your host system, because Vagrant's basic shared folder setup somehow forces 777 (at least on Windows hosts).
Use Vagrant's NFS synced folder option (not available on Windows hosts out of the box); see the sketch after this list.
Configure the more complex rsync strategy as mentioned in Emyl's answer (slower sync speeds).
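A rough sketch of option 2 (NFS), assuming a Linux or macOS host; the Vagrantfile lines are Ruby, shown as comments, and the private network address is an assumption:

# Hypothetical Vagrantfile entries: NFS needs a private network on the guest
#   config.vm.network "private_network", ip: "192.168.56.10"
#   config.vm.synced_folder ".", "/vagrant", type: "nfs"
# Recreate the mounts after changing the Vagrantfile:
vagrant reload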

What should I do to secure my server, chmod-wise?

I want to let my friend access my server so he can host his website. Let's call him Joris.
# useradd joris
Note that I'm on Debian. So now a /home/joris has been created. This is cool and all. BUT. He can
cd /
cd /etc/
cd /var/www/
He can cd practically everywhere; maybe he can't delete anything, but he can see everything, which I don't want him to. Is that normal?
First, I would suggest reading the Debian Administrator's Handbook, either by running aptitude install debian-handbook or by using a search engine to find a copy online. It covers many topics about security that will be of use to you, especially when sharing a server with multiple users.
As far as being able to access various directories goes, Debian is VERY relaxed for my tastes with its default permissions setup. Check the default UMASK setting (/etc/login.defs) so that you have a more secure setup when adding users.
I remove o-rx from things like /var/www and grant access using Access Control Lists (ACLs). If you are unfamiliar with ACLs, I highly recommend familiarizing yourself with them, as they are much more robust than the default permission system.
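A rough sketch of that approach; the joris user comes from the question above, while the site path and umask value are assumptions:

# Tighten the default UMASK for newly added users in /etc/login.defs, e.g. UMASK 027
# Remove read/execute for "other" on the web root
chmod -R o-rx /var/www
# Grant joris access to just his own site via ACLs (path is an assumption)
setfacl -R -m u:joris:rwx /var/www/joris.example.com
# Make newly created files inherit the same ACL
setfacl -R -d -m u:joris:rwx /var/www/joris.example.com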
As for what you should protect, that will depend on your setup. For most things in /etc it will be self-explanatory whether or not you can remove read access for users outside of the owner/group (like your web server configuration directory). You can also use permissions to limit access to specific binaries that users should never have access to, like mysql or gcc.
In the long run your setup will be unique to your specific needs. Reading the Debian Handbook will be immensely helpful in securing your box not only from the outside, but from the inside as well.
Hope this helps point you in the right direction.
