I have a working ShinyProxy app with LDAP authentication. However, to retrieve data from the SQL database I currently use a hardcoded connection string in my R code, with the credentials included in it (not recommended). I use a service user because my end users don't have permission to query the database themselves:
con <- DBI::dbConnect(odbc::odbc(),
                      encoding = "latin1",
                      .connection_string = 'Driver={Driver};Server=Server;Database=dbb;UID=UID;PWD=PWD')
I tried to replace the credentials with environment variables that I pass from my Linux host to the container. This works when I run the container outside ShinyProxy and pass the environment variables at runtime with the following Docker command:
docker run -it --env-file env.list app123
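Inside the R code the credentials are then read from the environment, roughly like this (a sketch; the variable names DB_UID and DB_PWD are illustrative, not the actual ones):

# Read credentials passed in via the container environment (names are illustrative)
uid <- Sys.getenv("DB_UID")
pwd <- Sys.getenv("DB_PWD")

con <- DBI::dbConnect(odbc::odbc(),
                      encoding = "latin1",
                      .connection_string = paste0(
                        "Driver={Driver};Server=Server;Database=dbb;",
                        "UID=", uid, ";PWD=", pwd))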
However, when using ShinyProxy, it is not clear to me how to configure this in the YAML config file. How do I pass the equivalent of --env-file env.list at this level so that it is picked up in the launched containers?
Any help kindly appreciated!
From this closed issue: https://github.com/openanalytics/shinyproxy/issues/99
Your application.yaml could look something like this:
proxy:
  title: Open Analytics Shiny Proxy
  logo-url: http://www.openanalytics.eu/sites/www.openanalytics.eu/themes/oa/logo.png
  landing-page: /
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  port: 8080
  authentication: simple
  admin-groups: admin
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admin
  # Docker configuration
  docker:
    internal-networking: true
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-env-file: /app/shinyproxy/test.env
      container-env:
        bar: baz
      access-groups: admin
      container-network: shinyproxy_reprex_default

logging:
  file:
    shinyproxy.log
Specifically, it seems you can set environment variables from a file using container-env-file.
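The file referenced by container-env-file would hold plain KEY=value lines, the same format as the file passed to docker run --env-file, e.g. (a sketch with illustrative names and values):

DB_UID=serviceuser
DB_PWD=secret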
We are using the s2i build command in our Azure DevOps pipeline, via the command task below:
`./s2i build http://azuredevopsrepos:8080/tfs/IT/_git/shoppingcart --ref=S2i registry.access.redhat.com/ubi8/dotnet-31 --copy shopping-service`
The above command asks for a username and password when the task is executed.
How can we provide the username and password of the Git repository as part of the command we are trying to execute?
Git credential information can be put in a .gitconfig file in your home directory (see *1 for Git's credential storage options).
Looking at the documentation for the s2i CLI,*2 I couldn't find any information about secured Git access.
I realized that an OpenShift BuildConfig uses the .gitconfig file while building a container image,*3 so this could work.
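For example, a minimal sketch of the 'store' credential helper setup; the host below matches the repository URL in the question, and the username/PAT are placeholders.

~/.gitconfig:

[credential]
    helper = store

~/.git-credentials (one URL per line; URL-encode the username or PAT if they contain special characters):

http://<username>:<PAT>@azuredevopsrepos:8080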
*1: https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage
*2: https://github.com/openshift/source-to-image/blob/master/docs/cli.md#s2i-build
*3: https://docs.openshift.com/container-platform/4.11/cicd/builds/creating-build-inputs.html#builds-gitconfig-file-secured-git_creating-build-inputs
I must admit I am unfamiliar with Azure DevOps pipelines; however, if this is running a build on OpenShift, you can create a secret with your credentials using oc:
oc create secret generic azure-git-credentials --from-literal=username=<your-username> --from-literal=password=<PAT> --type=kubernetes.io/basic-auth
Link the secret created above to the builder service account; this is the account OpenShift uses by default behind the scenes when running a new build:
oc secrets link builder azure-git-credentials
Lastly, link this source secret to the build config:
oc set build-secret --source bc/<your-build-config> azure-git-credentials
The next time you run your build, the credentials should be picked up from the source secret in the build config.
You can also do this from the OpenShift UI. The steps below mirror what was done above; choose one approach, not both.
Create a secret from YAML, modifying the values below where indicated:
kind: Secret
apiVersion: v1
metadata:
  name: azure-git-credentials
  namespace: <your-namespace>
data:
  password: <base64-encoded-password-or-PAT>
  username: <base64-encoded-username>
type: kubernetes.io/basic-auth
Then, under the ServiceAccounts section in OpenShift, find and edit the 'builder' service account:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: builder
  namespace: xxxxxx
secrets:
  - name: azure-git-credentials   ### only add this line, do not edit anything else.
Finally, edit the build config for your build, find the git entry, and add the sourceSecret entry:
source:
  git:
    uri: "https://github.com/user/app.git"
  ### Add the entries below ###
  sourceSecret:
    name: "azure-git-credentials"
I'm trying to run Dagster using celery-k8s, with examples/celery-k8s as a starting point. Upon running the pipeline from the playground I get:
Initialization of resources [s3, io_manager] failed.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have configured the AWS credentials in environment variables as mentioned in the documentation:
deployments:
  - name: "user-code-deployment-test"
    image:
      repository: "somasays/dagster-usercode-example"
      tag: "0.5"
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/workspace/repo.py"
    port: 3030
    env:
      AWS_ACCESS_KEY_ID: AAAAAAAAAAAAAAAAAAAAAAAAA
      AWS_SECRET_ACCESS_KEY: qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
      AWS_DEFAULT_REGION: eu-central-1
I can also see these values set in the environment variables of the pod, and I can access the S3 location after pip install awscli and aws s3 ls (see the screenshot below). The job pod, however, throws Unable to locate credentials.
Please help
The deployment configuration applies to the user code servers, whereas the Celery executor runs your pipeline code in separate Kubernetes jobs. To provide your secrets there, you will want to configure the env_secrets field of the celery-k8s executor in your pipeline run config.
See https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-k8s/dagster_k8s/job.py#L321-L327 for details on the config.
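For example, a sketch of the relevant part of the run config, assuming you have created a Kubernetes secret named aws-credentials in the cluster that contains AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (the secret name is an assumption):

execution:
  celery-k8s:
    config:
      env_secrets:
        - aws-credentials   # name of an existing Kubernetes secret (assumption)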
Background:
I am only able to get past the Ansible console install/config tasks by adding --region localhost in /usr/share/eucalyptus-ansible/roles/cloud-post/tasks/console.yml wherever it calls tools that take that argument.
Otherwise, each sub-task fails like this: ["euca-describe-images: error: connection error (('Connection aborted.', gaierror(-2, 'Name or service not known')))"]
Running the commands from that playbook directly on the Eucalyptus server being configured gives the same result unless I specify --region localhost.
Problem:
I'm stuck here: [cloud-post : update console route53 system domain for eucalyptus-cloud authentication]
Error: "euform-update-stack: error (ValidationError): No updates are to be performed.", "stderr_lines": ["euform-update-stack: error (ValidationError): No updates are to be performed."]
All services are running except the ImagingBackend, which is Not Ready.
No instances are running according to euca-describe-instances.
Images are available:
IMAGE ami-5be483c81cf8bd65c eucalyptus-console-image-5-0-823/eucalyptus-console-image-5-0-823.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-5be483c81cf8bd65c type eucalyptus-console-image
TAG image ami-5be483c81cf8bd65c version 5.0.823
IMAGE ami-f31092ddb73e29af9 eucalyptus-service-image-v5.0.100/eucalyptus-service-image.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-f31092ddb73e29af9 provides imaging,loadbalancing
TAG image ami-f31092ddb73e29af9 type eucalyptus-service-image
TAG image ami-f31092ddb73e29af9 version 5.0.100
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com:
EDIT:
Solved. Details are in the comments of the marked answer.
The name error most likely means that DNS for the domain cloud.lan.com is not being correctly delegated to your deployment. To test this, check if the nameserver is found:
dig +short NS cloud.lan.com
You should see "ns1.cloud.lan.com", and you should then be able to use that nameserver to resolve services, e.g.:
dig +short ec2.cloud.lan.com @ns1.cloud.lan.com
which should be the IP of the host for the compute service.
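If the NS lookup returns nothing, the delegation would typically be added in the parent lan.com zone, roughly like this (a sketch in BIND zone-file syntax; the A record must point at the host providing the Eucalyptus DNS service, and the IP below is a placeholder):

cloud.lan.com.      IN  NS  ns1.cloud.lan.com.
ns1.cloud.lan.com.  IN  A   192.0.2.10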
The second item is a bug in the Ansible playbook that occurs when the stack is already present and up to date. To work around it, you can either update your playbook or delete the stack before running the playbook. Depending on how far the playbook progressed, you may already have a script to do this:
/usr/local/bin/console-manage-stack -a delete
The related playbook change is https://github.com/AppScale/ats-deploy/pull/36
I am writing a playbook that dynamically adds the network servers provided by the user at a prompt to the host file. I then want a task that will SSH to each server and run the show version command to fetch the OS of those network devices.
As of now, I am getting the following error:
"Unable to automatically determine host network os. Please manually configure ansible_network_os value for this host"
I don't want to define the OS beforehand; rather, I want the playbook to determine it.
I tried entering some configuration in the host file, as below:
[device]
192.1xx.xxx.xx
[all:vars]
ansible_connection=network_cli
ansible_user=user
ansible_ssh_pass=password
ssh_args = -o GSSAPIAuthentication=no
The playbook below takes the device name from the user and adds it to the host file; it then needs to SSH to those servers and fetch the OS details:
---
- name: add host dynamically
  hosts: localhost
  gather_facts: no
  vars:
    Server: null_val1
  vars_prompt:
    - name: "Server"
      prompt: "Please enter the non-reporting server/IP"
      private: no
      default: null_val1
  pre_tasks:
    - name: Save variable
      set_fact:
        Server: "{{Server}}"

- name: Copying servers to host file
  hosts: localhost
  become: true
  tasks:
    - name: copying variable value
      lineinfile:
        path: "{{playbook_dir}}/hosts.txt"
        line: "{{Server}}"
        insertafter: '^\[device\]'
        state: present

- name: Main execution task
  hosts: device
  gather_facts: true
  tasks:
    - name: Run command on devices
      cli_command:
        command: show version
      register: result

    - name: display result
      debug:
        var: result.stdout_lines
The expected result is the OS of the network device(s) entered in the host file, obtained by running the show version command remotely.
I am able to do the following:
Install the Salt master and minion (using the root user).
Log in to the master machine and execute a salt command to install Java/Tomcat on the minion server.
Result: Java/Tomcat is installed via the root user.
What I want to do is:
Install Java/Tomcat on the minion server as the user 'tomcatuser'.
As per my understanding, the only way of doing this is to install my minion via tomcatuser.
Is my understanding correct?
Is there any other way?
I think you are mixing up the SaltStack control plane and how it controls the application configuration.
For the Salt master and minion to communicate, you need to start both services as root so that they can control most of the configuration process. From there on, you can specify the user and group for application deployment inside your SLS configuration.
Now, for your Tomcat/Java/whatever package, you can refer to the SaltStack state configuration to specify your own user and group for the configuration files and even the startup (with some other modifications), e.g.:
Deploy foo configuration:
  file.managed:
    - name: /etc/foo.conf
    - source:
      - salt://foo.conf
    - user: foo
    - group: users
    - mode: 644
Then, to start up your Tomcat, you can do something similar using a crontab entry that specifies the user you want (as long as it does not need to bind to a port below 1024). Or you can check whether salt.states.tomcat is helpful for starting the service: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.tomcat.html
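For example, a minimal sketch of this idea (the package name and install directory are assumptions for your distribution): the root-run minion installs Tomcat, but ownership of the installation is handed to tomcatuser.

install_tomcat:
  pkg.installed:
    - name: tomcat          # package name is an assumption

tomcat_owned_by_tomcatuser:
  file.directory:
    - name: /opt/tomcat     # install location is an assumption
    - user: tomcatuser
    - group: tomcatuser
    - recurse:
      - user
      - group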