Add OpenID users to Open Distro Kibana

I've configured opendistro_security for OpenID. When I attempt to authenticate a user, it fails, presumably because that user has no permissions. How do I give permissions to an OpenID user? I can't seem to find an obvious way to do so with internal_users.yml.

I solved it. For posterity, here's what I needed to do in addition to the OpenID settings in the kibana.yml file.
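For reference, the kibana.yml OpenID settings I mean look roughly like this (a minimal sketch; the client ID and secret are placeholders for your own Google OAuth credentials):
# Sketch only: client_id/client_secret values are placeholders
opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "https://accounts.google.com/.well-known/openid-configuration"
opendistro_security.openid.client_id: "<your-client-id>"
opendistro_security.openid.client_secret: "<your-client-secret>"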
1: In the config.yml file on each of my Elasticsearch nodes I needed to add the following:
authc:
  openid_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: openid
      challenge: false
      config:
        subject_key: email
        roles_key: roles
        openid_connect_url: https://accounts.google.com/.well-known/openid-configuration
    authentication_backend:
      type: noop
Since I'm using Google as my identity provider, I needed to make sure my subject_key was "email".
2: I needed to run the security config script on each node:
docker exec -it elasticsearch-node1 /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl && \
docker exec -it elasticsearch-node2 /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl
3: I needed to map the users that I want to have admin access to a role:
all_access:
  reserved: false
  backend_roles:
    - "admin"
  users:
    - "name#email.com"
  description: "Maps an openid user to all_access"
Now I can assign roles to other users from Kibana.

Related

Enabling account deletion on nats server

I was trying to prune some users from my nats server by doing:
nsc push --system-account SYS -u nats://localhost:4222 -P
but I got the following error:
server nats-comm-2 responded with error: delete accounts request by SOME_KEY_VALUE failed - delete must be enabled in server config
The meaning of the error is pretty obvious when I examine the help documentation for nsc push -P:
Only works with nats-resolver enabled nats-server. Mutually exclusive of account-removal/diff
But I'm not sure how to enable this in my nats server config. How do I allow for account pruning?
I found documentation in the resolver section showing that I could add allow_delete: true to the config, but since my YAML config uses camelCase, I had to write it as allowDelete: true instead:
nats:
  auth:
    enabled: true
    resolver:
      type: full
      allowDelete: true
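For comparison, if you are editing a raw nats-server configuration file rather than Helm values, the equivalent setting keeps the snake_case name. A sketch based on the NATS resolver documentation; the dir path is just an example:
# nats-server.conf (sketch)
resolver {
    type: full
    dir: "/data/resolver"     # example JWT store location
    allow_delete: true        # allows nsc push --prune to delete accounts
}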

Passing SQL credentials to shinyproxy app

I have a working ShinyProxy app with LDAP authentication. However, for retrieving data from the SQL database I currently use (not recommended) a hardcoded connection string in my R code with the credentials embedded in it (I use a service account because my end users don't have permission to query the database):
con <- DBI::dbConnect(odbc::odbc(),
                      encoding = "latin1",
                      .connection_string = 'Driver={Driver};Server=Server;Database=dbb;UID=UID;PWD=PWD')
I tried to replace the connection string with environment variables that I pass from my Linux host to the container. This works when running the container outside ShinyProxy by passing the environment variables at runtime with the following Docker command:
docker run -it --env-file env.list app123
However, when using ShinyProxy, it is not clear to me how to configure this in the YAML config file. How do I pass the --env-file env.list option at this level so that it is picked up in the linked containers?
Any help kindly appreciated!
From this closed issue: https://github.com/openanalytics/shinyproxy/issues/99
Your application.yaml could look something like this:
proxy:
  title: Open Analytics Shiny Proxy
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8080
  authentication: simple
  admin-groups: admin
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admin
  # Docker configuration
  docker:
    internal-networking: true
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-env-file: /app/shinyproxy/test.env
      container-env:
        bar: baz
      access-groups: admin
      container-network: shinyproxy_reprex_default

logging:
  file:
    shinyproxy.log
Specifically, it seems you can set environment variables from a file using container-env-file.
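If you go that route, the R code from the question could build the connection string from those environment variables instead of hardcoding the credentials. A sketch, assuming the env file (for example test.env) defines hypothetical variables named DB_DRIVER, DB_SERVER, DB_NAME, DB_UID and DB_PWD:
# Sketch: the DB_* variable names are placeholders for whatever your env file defines
con <- DBI::dbConnect(odbc::odbc(),
                      encoding = "latin1",
                      .connection_string = sprintf(
                        'Driver={%s};Server=%s;Database=%s;UID=%s;PWD=%s',
                        Sys.getenv("DB_DRIVER"),
                        Sys.getenv("DB_SERVER"),
                        Sys.getenv("DB_NAME"),
                        Sys.getenv("DB_UID"),
                        Sys.getenv("DB_PWD")))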

How to create a user with a random password?

I installed cloud-init in an OpenStack image (CentOS 7). How can I create a user with a random password after the instance launches (a public key will also be injected for this user)?
I would prefer not to paste a script into the instance launch panel. Thank you all...
There are options to set the passwords for the built-in users using cloud-init:
Option 1: Using OpenStack Horizon
If you are using Horizon to launch the instance, provide the following config as the post-creation customization script (user data):
#cloud-config
chpasswd:
  list: |
    cloud-user:rhel
    root:rheladmin
  expire: False
Here, passwords are set for the cloud-user and root users of the RHEL image. The same approach works for any user of any image; simply replace the user name.
Option 2: Using an OpenStack Heat template
Provide the user_data in the Heat template as below:
heat_template_version: 2015-04-30
description: Launch the RHEL VM with a new password for cloud-user and root user
resources:
  rhel_instance:
    type: OS::Nova::Server
    properties:
      name: 'demo_instance'
      image: '15548f32-fe27-449b-9c7d-9a113ad33778'
      flavor: 'm1.medium'
      availability_zone: zone1
      key_name: 'key1'
      networks:
        - network: '731ba722-68ba-4423-9e5a-a7677d5bdd2d'
      user_data_format: RAW
      user_data: |
        #cloud-config
        chpasswd:
          list: |
            cloud-user:rhel
            root:rheladmin
          expire: False
Here, passwords are set for the cloud-user and root users of the RHEL image; the same works for any user of any image.
You can replace rhel and rheladmin with your desired passwords.
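Since the original question asked for a random password plus an injected public key, here is a minimal cloud-config sketch (the user name and key are placeholders; per the cloud-init set-passwords documentation, RANDOM makes cloud-init generate a password and write it to the console log):
#cloud-config
# Sketch only: 'appuser' and the key below are placeholders
users:
  - default                      # keep the image's default user (optional)
  - name: appuser
    ssh_authorized_keys:
      - ssh-rsa AAAA... your-key-comment
chpasswd:
  list: |
    appuser:RANDOM
  expire: False
The generated password can then be read from the instance console log (for example via openstack console log show).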

Login password for ubuntu, RHEL, any cloud image

OpenStack cloud images:
There are multiple cloud images available at https://docs.openstack.org/image-guide/obtain-images.html. Once deployed, you log in to the VMs either with an SSH key pair or with a password. But there are images where SSH key pair login is disabled and there is no built-in password by default. How do you log in to these VMs when you only know the user name?
There are options to set the passwords for the built-in users using cloud-init:
Option 1: Using OpenStack Horizon
If you are using Horizon to launch the instance, provide the following config as the post-creation customization script (user data):
#cloud-config
chpasswd:
  list: |
    cloud-user:rhel
    root:rheladmin
  expire: False
Here, passwords are set for the cloud-user and root users of the RHEL image. The same approach works for any user of any image; simply replace the user name.
Option 2: Using an OpenStack Heat template
Provide the user_data in the Heat template as below:
heat_template_version: 2015-04-30
description: Launch the RHEL VM with a new password for cloud-user and root user
resources:
  rhel_instance:
    type: OS::Nova::Server
    properties:
      name: 'demo_instance'
      image: '15548f32-fe27-449b-9c7d-9a113ad33778'
      flavor: 'm1.medium'
      availability_zone: zone1
      key_name: 'key1'
      networks:
        - network: '731ba722-68ba-4423-9e5a-a7677d5bdd2d'
      user_data_format: RAW
      user_data: |
        #cloud-config
        chpasswd:
          list: |
            cloud-user:rhel
            root:rheladmin
          expire: False
Here, passwords are set for the cloud-user and root users of the RHEL image; the same works for any user of any image.

Installing Java/Tomcat via SaltStack using non-root user on Ubuntu

I am able to do the following:
- install the Salt master and minion (using the root user)
- log in to the master machine and execute Salt commands to install Java/Tomcat on the minion server
Result: Java/Tomcat is installed via the root user.
What I want to do is install Java/Tomcat on the minion server as the user 'tomcatuser'.
As per my understanding, the only way of doing this is to install my minion as tomcatuser.
Is my understanding correct?
Is there any other way?
I think you are mixing up the SaltStack controller with how it controls the application configuration.
For the Salt master and minion to communicate, you need to start both services as root, so they can control most of the configuration process. From there on, you can specify the user and group for application deployment inside your .sls configuration.
Now, coming to your Tomcat/Java/whatever package: you can refer to the Salt state configuration to specify your own user and group for the configuration files and even the startup (with other modifications), e.g.:
Deploy foo configuration:
  file.managed:
    - name: /etc/foo.conf
    - source:
      - salt://foo.conf
    - user: foo
    - group: users
    - mode: 644
Then, to start up your Tomcat, you can do something similar by using a crontab and specifying the user you want (as long as it does not listen on a privileged port below 1024). Or you can check whether salt.states.tomcat helps to start the service: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.tomcat.html
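As a concrete illustration, a state file along these lines keeps the minion running as root while the Tomcat files and the startup command belong to tomcatuser (a sketch only: the package name, the paths, and the existing 'tomcatuser' account are assumptions):
# Sketch: assumes a 'tomcatuser' account and Tomcat unpacked under /opt/tomcat
install_java:
  pkg.installed:
    - name: openjdk-11-jdk        # installed by the minion (as root), like any package

tomcat_home:
  file.directory:
    - name: /opt/tomcat
    - user: tomcatuser
    - group: tomcatuser
    - dir_mode: '755'

start_tomcat:
  cmd.run:
    - name: /opt/tomcat/bin/startup.sh
    - runas: tomcatuser           # the command itself runs as the non-root user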
