I am trying to configure Hue with CDH 4.3. I am facing a configuration error for HDFS. It says: "Current value: http://XXX.XX.XX.XXX:50070/webhdfs/v1/ Filesystem root '/' should be owned by 'hdfs'"
But in my case the owner of the root folder is 'user', so how can I tell Hue that the owner of the root folder is 'user'?
You can update DEFAULT_HDFS_SUPERUSER to 'user', but note that this is not the officially recommended way and it might break things.
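For reference, DEFAULT_HDFS_SUPERUSER is a constant inside Hue's bundled Hadoop library rather than a normal hue.ini setting; a minimal sketch of the override, with the caveat that the exact file and path are an assumption and vary by install and Hue version:
# somewhere in Hue's hadoop library source (exact file/path is an assumption;
# grep the Hue install for DEFAULT_HDFS_SUPERUSER to locate it)
DEFAULT_HDFS_SUPERUSER = 'user'  # ships as 'hdfs'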
Modify the custom "hdfs-site" in the HDFS part and add the key-value pairs below:
hadoop.proxyuser.hue.hosts=*
hadoop.proxyuser.hue.groups=*
Then you should restart the cluster.
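If you manage the Hadoop configuration files by hand instead of through a management UI, the same keys would look roughly like this in XML property form (a sketch; whether they belong in hdfs-site.xml or core-site.xml depends on your setup):
<!-- proxyuser entries for Hue, in Hadoop's XML property form -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>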
I thought maybe usermod -a -G root 'user', so 'user' can get the root filesystem ACLs. You also need to modify the Hue configuration file pseudo-distributed.ini to make sure the "Webserver runs as this user" part is set:
server_user='user'
server_group='user'
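In pseudo-distributed.ini (or hue.ini) those two settings normally sit under the [desktop] section; a minimal sketch:
[desktop]
  # run the Hue web server processes as this user/group
  server_user=user
  server_group=user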
I am trying to set up SELinux and an encrypted additional partition that I mount at startup using a systemd service.
If I run SELinux in permissive mode, everything runs ok (partition is correctly mounted, data can be accessed and service runs properly).
If I run SELinux in enforcing mode (enforcing=1), I am not able to mount that partition, and I get the error:
/dev/mapper/temporary-cryptsetup-1808: chown failed: Permission denied
sh[1777]: Failed to open temporary keystore device.
sh[1777]: Command failed with code 5: Input/output error
Any ideas to fix that?
Audit2allow does not return any additional rules to be added
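For reference, a typical way to run that check (a sketch, assuming auditd is running and logging AVC denials; the module name is made up):
# collect denials mentioning cryptsetup and turn them into a candidate policy module
grep cryptsetup /var/log/audit/audit.log | audit2allow -M local-cryptsetup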
Solved by assigning the lvm_exec_t context to cryptsetup.
In the lvm.fc file, cryptsetup was defined as /bin/cryptsetup, but I had to change it to /usr/sbin/cryptsetup, which is where the binary actually is.
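If you would rather not edit lvm.fc and rebuild the policy module, the same relabel can usually be done with a local file-context rule (a sketch, assuming the semanage/policycoreutils tools are installed):
# add a local file-context rule and apply it to the binary on disk
semanage fcontext -a -t lvm_exec_t '/usr/sbin/cryptsetup'
restorecon -v /usr/sbin/cryptsetup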
I'm working with the docker image "mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019". I've noticed that the default user for windowsservercore is ContainerAdministrator. If I try to run the image with the user ContainerUser (docker run -u ContainerUser mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019) I get the following error: ERROR: Failed to stop or query status of service 'w3svc' error [80070005].
I think that the error is related to the permissions that the user needs to run ServiceMonitor. So, first of all, is it correct to assume that windowsservercore images must run with ContainerAdministrator and cannot run with ContainerUser?
If the assumption above is correct I would like to confirm if running the container with ContainerAdministrator can expose the container to a security issue. As far as I understand even if the ServiceMonitor.exe is started with ContainerAdministrator the external-facing process is the IIS Windows service, which runs under a local account in IIS_IUSRS group. So even if an attacker could compromise the application it will not have administrator access to the container. Can anyone confirm if this is correct?
ContainerAdministrator is a special virtual account. It is the default account when you run a container, so if your CMD instruction starts a console app, that app runs as ContainerAdministrator. If your app runs in the background as a Windows Service, then the account will be the service account; ASP.NET apps, for example, run under application pool accounts.
You could refer to the link below:
Accessing the Docker host pipe inside windows container with non-admin user
I was in the same position you are in. I can't confirm your assumption (though I assume the same), but I can provide our Dockerfile, which enabled us to run as non-root (to comply with an AKS policy).
dockerfile
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2019
SHELL ["cmd", "/S", "/C"]
# username = '1000' so the k8s policy can verify it's a non-root user
RUN net user /add 1000
# We copy some config files in the actual startup.ps1 so we need write access here
RUN icacls C:\inetpub\wwwroot\ /grant 1000:(OI)(CI)F /t
# ServiceMonitor.exe puts some environment variables in the applicationHost.config
RUN icacls C:\Windows\System32\inetsrv\Config\ /grant 1000:(OI)(CI)F /t /c
# S-1-5-32-545 is group Builtin\Users which contains user 1000. Allows user to restart the w3svc service
RUN sc.exe sdset w3svc D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;RPWPDTLO;;;S-1-5-32-545)
COPY startup.ps1 /
WORKDIR /inetpub/wwwroot
ARG source=obj/Docker/publish
COPY ${source} .
USER 1000
ENTRYPOINT ["powershell", "/startup.ps1"]
startup.ps1
# ContainerAdministrator doesn't have these variables, but the
# custom account does have them. If this gets put into the applicationHost.config,
# IIS will try to write to the user-specific temp directory and fail (with an unrelated error).
Remove-Item env:TMP
Remove-Item env:TEMP
C:/ServiceMonitor.exe w3svc
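For what it's worth, a quick way to sanity-check the result (the image tag and commands below are illustrative, not part of the original answer):
# build the image and run a throwaway container that just prints the current user
docker build -t myapp-nonroot .
docker run --rm --entrypoint powershell myapp-nonroot -Command whoami
# should report the custom '1000' account rather than ContainerAdministrator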
I am already running nginx as the root user on my Red Hat server, but I want to run nginx as a non-root user now because of some security concerns.
Please help me to do it.
Thanks
You'll need a user nginx with the specified rights, or you can use the existing group www-data. In either your service file or your upstart file you need to specify the user.
service / init
use sudo -u <username> <cmd>
systemd / upstart
There are options like User= and Group=
But we will need more information about your system. Is there already a service file? A Red Hat version number? Can you post your service file?
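If this is a recent Red Hat release running systemd, a minimal sketch of that idea as a drop-in override (the user/group names and the capability line are assumptions, not from your setup):
# /etc/systemd/system/nginx.service.d/user.conf
[Service]
User=nginx
Group=nginx
# allow binding to ports below 1024 without root (systemd >= 229),
# or move nginx to a port >= 1024 instead
AmbientCapabilities=CAP_NET_BIND_SERVICE
Then reload and restart:
systemctl daemon-reload
systemctl restart nginx.service
Note that you may also need to adjust ownership of nginx's pid file, log, and cache directories so the non-root user can write to them.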
I have SLS files set up to copy things from a network folder to a local directory on a minion.
Looks a little like this:
cmd-test:
  cmd.run:
    - name: 'ROBOCOPY \\\CygwinSource C:\CygwinSource /E'
and get the following output:
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Tuesday, December 6, 2016 10:50:35 AM
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Getting File System Type of Source \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Source - \\<Server>\<program>\<folder>\
Dest : C:\<path>\<folder>\
Files : *.*
Options : *.* /S /E /DCOPY:DA /COPY:DATS /PURGE /MIR /NP /R:1 /W:1
------------------------------------------------------------------------------
NOTE : NTFS Security may not be copied - Source may not be NTFS.
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Accessing Source Directory \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Waiting 1 seconds... Retrying...
When I run the same thing locally on the command line as 'ROBOCOPY \\\CygwinSource C:\CygwinSource /E', it works perfectly. I have no idea how to fix this 'use local user account' error that Robocopy seems to give when run through Salt.
I also tried adding /MIR and /SEC, which didn't work.
Running Windows 10, Minion 2016.3.3
Master: Red Hat, 2016.3.3
Salt seems to be connecting to the network resource with a computer account. A few possible solutions:
Try changing the Salt Service on the Client (if that's how salt is executing the commands) to run as a domain user.
Try using the salt file server (see the SLS sketch after this list)
Implement this hacky workaround where a scheduled task is created - discussed in the github issue that seems related to your problem: https://github.com/saltstack/salt/issues/16340
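For the file-server option, a minimal SLS sketch, assuming you copy the directory under the master's file_roots so it is reachable as salt://CygwinSource (the state ID and paths are made up):
copy-cygwin-source:
  file.recurse:
    - name: 'C:\CygwinSource'
    - source: salt://CygwinSource
Because the master serves the files over the Salt transport, the minion never touches the UNC share, which sidesteps the computer-account authentication problem.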
I am trying to mount an NFS device on my Linux machine.
My /etc/fstab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS in my NAS device.
When I run "mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/", I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the nfs server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so worth doing before looking for other problems. Note that, if you do have to change any entries then the nfs-server has to be stopped and re-started, as it reads the hosts file only when it is started.
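A quick way to do that check on the server (the hostname below is just a placeholder):
# on the NFS server: confirm the client resolves to the address you expect
getent hosts client-hostname
# then compare with what /etc/hosts actually contains for that client
grep client-hostname /etc/hosts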
Is there a config file on the NAS where you can put allowances for clients? E.g. on Debian-based OSes the config file is "/etc/exports"; you would put "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" there and activate it with "exportfs -a" (your NAS may do this automatically if you update the config via a web interface, I guess). Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
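Spelled out as actual config, that suggestion would look something like this on a Debian-style NAS (path and client address taken from the question):
# /etc/exports on the NAS
/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)
and then re-export with:
exportfs -a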
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might restart the NFS config and the NFS service on the NFS server, as well as run the export again:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition from the hard drive from within the VM. I mounted the physical partition on the host and exported it over NFS, but I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I could not see the files on it.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the client (the VM) I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the file system.
I saw this error, presumably due to an older NFS client, and adding -o nfsvers=3 fixed the issue for me, e.g. mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/