How to run nginx as a non-root user in Red Hat - nginx

I am already running nginx as the root user on my Red Hat server, but I want to run nginx as a non-root user now because of some security concerns.
Please help me to do it.
Thanks

You'll need a user nginx with the required rights, or you can use the existing group www-data. In either your service or your upstart file you need to specify the user.
service / init
use sudo -u <username> <cmd>
systemd / upstart
There are options like User= and Group=
But we will need more information about your system. Is there already a service file? A Red Hat version number? Can you post your service file?
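For a systemd-based Red Hat release, a minimal sketch would be a drop-in override that sets the user and group; the unit name nginx.service and the nginx user/group below are assumptions about your setup, so adjust them to your package:
sudo mkdir -p /etc/systemd/system/nginx.service.d
# hypothetical drop-in; User=/Group= values assumed, change to your environment
sudo tee /etc/systemd/system/nginx.service.d/override.conf <<'EOF'
[Service]
User=nginx
Group=nginx
EOF
sudo systemctl daemon-reload
sudo systemctl restart nginx
Keep in mind that a non-root nginx cannot bind to ports below 1024, so you would also need to listen on higher ports (or grant the binary that capability) and make the pid file and log directories writable by that user.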

Related

NGINX Remote Editing of Configurations

I'm currently running a number of servers, each running NGINX as a reverse proxy to other websites. However, if I need to change a backend IP address or change other variables within NGINX, I need to manually SSH into the server and change the configurations OR log onto NGINX Proxy Manager.
What I'm looking to do is create a central website that will enable me to edit NGINX variables such as 'proxy_pass' and send the updated value to the selected remote server, updating the NGINX config and reloading the service.
Is there any current way to do this and how could I implement that? What comes to mind is some kind of CURL request to the remote server, and then I'm not sure how I'd automatically rewrite the correct portion of NGINX config etc.
Any help would be appreciated!
If you have root access on those servers, all you need is a service or a script that will fill in the new values. The simplest way I can see is to do it with a bash script and a template for the config file.
Template config file: /home/user/nginx_config/nginx.config.sample:
-- your generic config settings
location /your/location {
proxy_pass {{proxy_pass}};
}
-- rest of standard file
The bash script for filling the template: /home/user/nginx_config/generator.sh
#!/usr/bin/env bash
# Fill the nginx template with the new upstream address and install it.
new_ip=$1
template_path="/home/user/nginx_config/nginx.config.sample"
config_path="/etc/nginx/nginx.conf"
if [[ -z "$new_ip" ]]; then
    echo "Missing IP param"; exit 1
fi
# Keep a backup of the current config before overwriting it.
cp "$config_path" "${config_path}.bak"
sed "s/{{proxy_pass}}/$new_ip/g" "$template_path" > "$config_path"
echo "Done! Updated $config_path file to $new_ip:"
cat "$config_path"
Then, all you need to do is make a local script that connects over SSH and runs the generator script (with 1.2.3.4 as your new IP address):
sshpass -p password ssh -oStrictHostKeyChecking=no -oCheckHostIP=no user@your_server "bash /home/user/nginx_config/generator.sh 1.2.3.4"
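The generator only rewrites the file, and you also want the service reloaded; a hedged addition at the end of generator.sh (assuming nginx on the remote hosts is managed by systemd) would be:
# test the new config first, then reload only if it passes
nginx -t && systemctl reload nginx
That way a broken template never gets loaded, since the reload only runs when the config test succeeds.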

Installing Java/Tomcat via SaltStack using non-root user on Ubuntu

I am able to do the following:
install the salt master and minion (using the root user)
log in to the master machine and execute a salt command to install Java/Tomcat onto the minion server
Result: Java/Tomcat is installed via the root user
What I want to do is:
install Java/Tomcat on the minion server as the user 'tomcatuser'
As per my understanding, the only way of doing this is if I install my minion via tomcatuser.
Is my understanding correct?
Any other way?
I think you are mixing up the SaltStack controller and how it controls the application configuration.
For the salt master and minion to communicate, you need to start both services as root, to control most of the configuration process. From there on, you can specify the user and group for application deployment inside your sls configuration.
Now, coming to your Tomcat/Java/whatever package, you can refer to the SaltStack configuration to specify your own user and group for the configuration and even the startup (with other modifications), e.g.
Deploy foo configuration:
  file.managed:
    - name: /etc/foo.conf
    - source:
      - salt://foo.conf
    - user: foo
    - group: users
    - mode: 644
Then, to start up your Tomcat, you can do something similar by using a crontab and specifying the user you want (as long as it does not need to bind to a port below 1024), as sketched below. Or you can check whether salt.states.tomcat is helpful to start the services: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.tomcat.html
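To illustrate the crontab route, a hypothetical entry for tomcatuser (the /opt/tomcat install path is an assumption about your layout) could be:
# installed with: crontab -u tomcatuser -e
@reboot /opt/tomcat/bin/startup.sh
That starts Tomcat at boot as tomcatuser, without any root involvement beyond installing the crontab itself.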

SELinux Policy to Allow NGINX Access to Parallels Shared Folders on Mac

I'm trying to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop.
Host system: Mac OSX 10.10
Parallels Desktop: 10
Running Virtual OS: CentOS 7 (minimal / command line)
I have the Parallels Tools installed and in CentOS I see the shared folder: /media/psf/Shared-Folder
When I set the Nginx server root to that folder I get a 403 Forbidden. I know it is a configuration parameter that needs editing because if I change SELinux to Permissive, the files are served correctly in NGINX.
When checking how the files are mounted I see this:
root root system_u:object_r:removable_t:s0 /media/psf/Shared-Folder/
I can see the 'removable_t' context - however - my issue is that I cannot seem to find a way to allow the httpd service to serve files that are mounted as removable storage.
I have tried:
chcon -R -t public_content_t /media/psf/Shared_Folder/
chcon -R -t httpd_sys_content_t /media/psf/Development-Projects/
and in all cases I get a "chcon: failed to change context of: '...': Operation not supported" error.
Checking /usr/sbin/getsebool -a | grep http I do not see any option to allow httpd to access removable storage mounts.
Last item: I do not believe I can change the way Parallels mounts the shared folders.
Question: Is there a way to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop?
What you need to do is use semanage. To get it you have to install policycoreutils-python.
The same type of question has already been asked Here. Cheers!
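For reference, the semanage route that answer points at looks roughly like this (httpd_sys_content_t is the usual type for static web content, but whether the Parallels filesystem accepts labels at all is uncertain given the chcon error; a context= mount option is the common fallback when it does not):
sudo yum install -y policycoreutils-python
# persistently map the share to a type httpd is allowed to read, then relabel
sudo semanage fcontext -a -t httpd_sys_content_t "/media/psf/Shared-Folder(/.*)?"
sudo restorecon -Rv /media/psf/Shared-Folder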

Accessing remote Fuse/Karaf console using SSH

I have a Fuse ESB standalone server running in a RHEL box. I want to connect to the Karaf console remotely to manage the bundles.
If I close my current session, how do I get back to my Karaf console again?
I have my Fuse ESB configured to use port 8101 for SSH. Will I be able to connect to it directly through my SSH client (Putty)?
Or do I need another Fuse ESB instance locally to access the remote Fuse instance?
Either way I am not able to connect; it says access denied. Is there any other, easier way to connect to a remote Fuse/Karaf instance?
I even tried using Client.sh from the bin directory; it says authentication failure. But I have created a JAAS user with the Admin role.
By the way, is just a user enough to do this? Or does it also need public/private key configuration?
What is the usual approach for managing a remote Fuse/Karaf instance?
You can find many details in the JBoss Fuse documentation (the successor to Fuse ESB) at
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/
And there is a chapter on remote connecting to containers here
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html-single/Configuring_and_Running_JBoss_Fuse/index.html#ESBRuntimeRemote
You need to pass in credentials for a user on the container that is valid and is in the admin role.
The Karaf shell also has a jaas command, which allows you to list the users and their roles, add new users, and so on. You can also do some user management from the FMC web console that is part of Fuse ESB.
You might also want to check your iptables rules:
http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
- $ sudo iptables -I INPUT -p tcp -m tcp --dport 8101 -j ACCEPT
- $ sudo service iptables save
- $ service iptables restart
From another Karaf instance you can run this command:
JBossFuse:karaf@root> ssh -l username -P password -p port hostname
e.g.
- JBossFuse:karaf@root> ssh -l smx -P smx -p 8101 10.234.12.12
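A standard SSH client (which is what Putty is) can usually reach the same port directly, so a second Karaf instance is not strictly required; for example, reusing the smx user and host from the example above:
ssh -p 8101 smx@10.234.12.12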
You have to make sure that the ssh role name that is defined in etc/org.apache.karaf.shell.cfg
# sshRole defines the role required to access the console through ssh
#
sshRole = ssh
matches the one in etc/users.properties
#
# This file contains the users, groups, and roles.
# Each line has to be of the format:
#
# USER=PASSWORD,ROLE1,ROLE2,...
# USER=PASSWORD,_g_:GROUP,...
# _g_\:GROUP=ROLE1,ROLE2,...
#
# All users, groups, and roles entered in this file are available after Karaf startup
# and modifiable via the JAAS command group. These users reside in a JAAS domain
# with the name "karaf".
#
karaf = karaf,_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,ssh

Configuring Hue With CDH4.3

I am trying to configure Hue with CDH 4.3. I am facing a configuration error for HDFS. It says: "Current value: http://XXX.XX.XX.XXX:50070/webhdfs/v1/ Filesystem root '/' should be owned by 'hdfs'"
But in my case the owner of the root folder is 'user', so how can I tell Hue that the owner of the root folder is 'user'?
You can update DEFAULT_HDFS_SUPERUSER to 'user', but note that this is not the officially recommended way and it might break things.
Modify the custom "hdfs-site" in the HDFS part,
add the key-values below:
hadoop.proxyuser.hue.hosts=*
hadoop.proxyuser.hue.groups=*
Then you should restart the cluster.
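For illustration, those key-values correspond to entries like the following inside the <configuration> block of hdfs-site.xml (in a CDH cluster they would normally be set through Cloudera Manager rather than edited by hand):
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>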
I thought maybe usermod -a -G root 'user', so that 'user' can get the root filesystem ACLs; you also need to modify the Hue configuration file pseudo-distributed.ini to make sure the "Webserver runs as this user" part is set:
server_user='user'
server_group='user'
