Cannot for the life of me get Weaviate to launch with OIDC authentication - I have tried every combination under the sun and the pod falls into a CrashLoopBackOff.
I can deploy successfully with the defaults (anonymous access = true and no admin list), but as soon as I add OIDC, no luck.
Is there something wrong with my config? I add the config as per the documentation to the values.yaml before continuing on with the instructions.
I should note that I'm not at all familiar with Kubernetes and am using this as a learning experience.
Thanks so much for any help
YAML (from the Azure portal):
data:
  conf.yaml: |-
    ---
    authentication:
      anonymous_access:
        enabled: false
      oidc:
        client_id: client-id
        enabled: true
        issuer: https://issuerURL/authorize/
        skip_client_id_check: false
        username_claim: email
    authorization:
      admin_list:
        enabled: true
        users:
        - user@user.com
    query_defaults:
      limit: 100
    debug: false
For the issuer URL with Azure you will want it to be of the format https://login.microsoftonline.com/xxx-xxx-xxx-xxx/v2.0. You can find this via Azure > App Registrations > Endpoints > OpenID Connect metadata document (without the .well-known/openid-configuration suffix).
As of Weaviate version 1.15.3, the Weaviate console and Python client do not work with Azure, but there is an issue to fix this planned for 1.16.
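As a quick sanity check on the issuer value before redeploying, it can help to confirm that the OIDC discovery document actually exists at the URL Weaviate will derive from it. This is a sketch, not from the Weaviate docs; the tenant ID in the issuer is a placeholder.

```python
# Sketch: derive the OIDC discovery URL from an issuer and (optionally)
# fetch it to confirm the issuer is valid. The issuer below is a placeholder.
import json
import urllib.request

def metadata_url(issuer: str) -> str:
    """The OIDC discovery document lives at issuer + /.well-known/openid-configuration."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

issuer = "https://login.microsoftonline.com/xxx-xxx-xxx-xxx/v2.0"
url = metadata_url(issuer)
print(url)

# Uncomment to actually fetch the document and check the advertised issuer:
# with urllib.request.urlopen(url) as resp:
#     doc = json.load(resp)
# assert doc["issuer"] == issuer
```

If the fetched document's `issuer` field doesn't match what you put in `conf.yaml` exactly, token validation will fail, which is a common cause of OIDC startup trouble.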
I'm attempting to integrate Airflow with Okta, however there is little documentation available online. I'm referring to a blog article, but I can't seem to get Okta to work.
Blog URL : https://tech.scribd.com/blog/2021/integrating-airflow-and-okta.html
If anyone has used Airflow with Okta, please share your experiences.
In addition, I followed all the steps outlined in Airflow + Okta integration problem OAuth2.0.
I'm having the same problem with access being prohibited.
I had a bit of trouble getting this to work but in the end this is what I did:
Installed the following with PIP:
flask-appbuilder==3.4.5
sqlalchemy==1.3.18
authlib==1.0.1
In webserver_config.py:
from flask_appbuilder.security.manager import AUTH_OAUTH
AUTH_TYPE = AUTH_OAUTH
AUTH_ROLES_SYNC_AT_LOGIN = True
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Admin"
OAUTH_PROVIDERS = [
    {'name': 'okta', 'icon': 'fa-circle-o',
     'token_key': 'access_token',
     'remote_app': {
         'client_id': 'myclientid',
         'client_secret': 'myclientsecret',
         'api_base_url': 'https://myoktadomain.okta.com/oauth2/v1/',
         'client_kwargs': {
             'scope': 'openid profile email groups'
         },
         'access_token_url': 'https://myoktadomain.okta.com/oauth2/v1/token',
         'authorize_url': 'https://myoktadomain.okta.com/oauth2/v1/authorize',
         'jwks_uri': 'https://myoktadomain.okta.com/oauth2/v1/keys'
     }
    }
]
I have the following settings in my Okta app (screenshots omitted). In addition to those, I have these two settings as well:
Sign-in redirect URIs:
https://myairflowurl.com/home
https://myairflowurl.com/admin
https://myairflowurl.com/oauth-authorized
https://myairflowurl.com/login
https://myairflowurl.com/oauth-authorized/okta
(maybe we don't need all of these?)
Initiate login URI:
https://myairflowurl.com/login
As it stands, everyone who authenticates through Okta now gets Admin access. I believe that with some more work we can make use of roles and more granular permissions.
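For the more granular permissions, Flask-AppBuilder can map provider groups to Airflow roles instead of registering everyone as Admin. A hedged sketch for webserver_config.py, assuming the `groups` scope above is returned in the token; the Okta group names here are made up:

```python
# Sketch for webserver_config.py: map Okta groups to Airflow roles rather than
# defaulting everyone to Admin. The Okta group names are hypothetical.
AUTH_ROLES_SYNC_AT_LOGIN = True          # re-sync roles on every login
AUTH_USER_REGISTRATION = True            # auto-create users on first login
AUTH_USER_REGISTRATION_ROLE = "Viewer"   # fallback for users in no mapped group
AUTH_ROLES_MAPPING = {
    "airflow-admins": ["Admin"],   # Okta group -> Airflow role
    "airflow-users": ["User"],
}
```

With this in place, `AUTH_USER_REGISTRATION_ROLE` only applies to users whose groups match nothing in the mapping.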
I am using the Remote URL option, which reaches out to a remote web server to retrieve the needed data. Simply using https://rundeck:test@myserver.com works. However, I would like to pass the password in a secure way, so...
Option 1 uses 'Secure pass input' and the pass is retrieved from key storage; however, the password is then not added to the remote URL in
Option 2, which uses Remote URL: https://rundeck:${option.password.value}@myserver.com. My remote server receives the password as the literal string ${option.password.value} and not the actual password value retrieved in Option 1. I understand that Secure Remote Authentication can't be used in Options; however, I don't believe I have seen restrictions on what I want to do with Secure Password input in Rundeck's docs.
Lastly, typing the password into the Secure Password input option does add the password to the URL mentioned above. I have tested and verified that the ${option.password.value} value can be passed in a job's step; that part works. However, it does not appear to work in cascading options.
Currently, secure option values are not expanded as part of remote options; you can suggest that feature here (similar to this). Alternatively, you can create a specific custom plugin for it.
Another approach is to design a workflow that uses the HTTP Workflow Step Plugin (passing your secure password as part of the authentication in the URL) to access the web service, plus a JQ Filter to generate the desired data; then in another step you can get that data using data variables.
Like this:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 7f34f7ff-c4a3-4616-a2aa-0df491450366
  loglevel: INFO
  name: HTTP
  nodeFilterEditable: false
  options:
  - name: mypass
    secure: true
    storagePath: keys/password
    valueExposed: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - configuration:
        authentication: None
        checkResponseCode: 'false'
        method: GET
        printResponse: 'true'
        printResponseToFile: 'false'
        proxySettings: 'false'
        remoteUrl: https://user:${option.mypass}@myserver.com
        sslVerify: 'true'
        timeout: '30000'
      nodeStep: false
      plugins:
        LogFilter:
        - config:
            filter: .
            logData: 'true'
            prefix: result
          type: json-mapper
      type: edu.ohio.ais.rundeck.HttpWorkflowStepPlugin
    - exec: echo "name 2 is ${data.Name2}"
    keepgoing: false
    strategy: node-first
  uuid: 7f34f7ff-c4a3-4616-a2aa-0df491450366
I had a question related to Airflow v1.10.3. We recently upgraded Airflow from v1.9 to v1.10.3. With the new upgrade, we are experiencing a situation where any Celery execute commands coming in from the UI are not getting queued/executed by the message broker and Celery workers.
Based on Celery FAQ: https://docs.celeryproject.org/en/latest/faq.html#why-is-task-delay-apply-the-worker-just-hanging, it points to authentication issue, user not having the access.
We had web authentication (Google OAuth) in place in version v1.9, using the following config:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.google_auth

[google]
client_id = <client id>
client_secret = <secret key>
oauth_callback_route = /oauth2callback
domain = <domain_name>.com
Will the above config values still work, or do we need to set rbac = True and provide the Google OAuth credentials in webserver_config.py?
webserver_config.py:
from flask_appbuilder.security.manager import AUTH_OAUTH
AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Admin"
OAUTH_PROVIDERS = [{
    'name': 'google',
    'whitelist': ['@yourdomain.com'],  # optional
    'token_key': 'access_token',
    'icon': 'fa-google',
    'remote_app': {
        'base_url': 'https://www.googleapis.com/oauth2/v2/',
        'request_token_params': {
            'scope': 'email profile'
        },
        'access_token_url': 'https://oauth2.googleapis.com/token',
        'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
        'request_token_url': None,
        'consumer_key': '<your_client_id>',
        'consumer_secret': '<your_client_secret>',
    }
}]
Any help is very much appreciated. Thanks.
From my experience, both will work. Of course, since they call the FAB-based UI the "new UI", the old one will probably be killed off.
Your problem, though, doesn't sound like it has anything to do with user authentication, but with Celery access. It sounds like Airflow and/or Celery are not reading celery_result_backend or one of the other renamed options when they should.
Search for Celery config in their UPDATING document for the full list.
I am trying to authenticate the Google Datastore C# SDK in a K8s pod running in Google Cloud.
I could not find any way to inject the account.json file into DatastoreDb or DatastoreClient besides using the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Using the GOOGLE_APPLICATION_CREDENTIALS environment variable is problematic, since I do not want to leave the account file exposed.
According to the documentation at https://googleapis.github.io/google-cloud-dotnet/docs/Google.Cloud.Datastore.V1/index.html:
When running on Google Cloud Platform, no action needs to be taken to
authenticate.
But that does not seem to work.
A push in the right direction will be appreciated (:
I'd suggest using a K8s secret to store the service account key and then mounting it in the pod at run time. See below:
Create a service account for the desired application.
Generate and encode a service account key: generate a .json key for the service account created in the previous step, then encode it using base64 -w 0 key-file-name. This is important: K8s expects the secret's content to be Base64-encoded.
Create the K8s secret manifest file (see content below) and then apply it.
apiVersion: v1
kind: Secret
metadata:
  name: your-service-sa-key-k8s-secret
type: Opaque
data:
  sa_json: previously_generated_base64_encoding
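If the base64 command isn't available (for example on Windows), the encoding step can equally be done in a few lines of Python; a sketch, with the key file name as an assumption:

```python
# Sketch: Base64-encode a service account key file for the secret's data
# field, equivalent to `base64 -w 0 key-file-name`.
import base64

def encode_key(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# print(encode_key("key-file-name.json"))  # paste the output into sa_json
```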
Mount the secret.
volumes:
- name: service-account-credentials-volume
  secret:
    secretName: your-service-sa-key-k8s-secret
    items:
    - key: sa_json
      path: secrets/sa_credentials.json
Now all you have to do is set the GOOGLE_APPLICATION_CREDENTIALS environment variable to secrets/sa_credentials.json.
Hope this helps. Sorry for the formatting (in a hurry).
This is how it can be done:
var credential = GoogleCredential
    .FromFile(@"/path/to/google.credentials.json")
    .CreateScoped(DatastoreClient.DefaultScopes);
var channel = new Grpc.Core.Channel(
    DatastoreClient.DefaultEndpoint.ToString(),
    credential.ToChannelCredentials());
DatastoreClient client = DatastoreClient.Create(channel,
    settings: DatastoreSettings.GetDefault());
DatastoreDb db = DatastoreDb.Create(YOUR_PROJECT_ID, client: client);
// Do Datastore stuff...
// Shut down the channel when it is no longer required.
await channel.ShutdownAsync();
Taken from: https://github.com/googleapis/google-cloud-dotnet/blob/master/apis/Google.Cloud.Datastore.V1/Google.Cloud.Datastore.V1/DatastoreClient.cs
I'm trying to get Spring Cloud Gateway to load-balance across a couple of instances of our application, but I just can't figure it out. We don't have a service registry at present (no Eureka etc.).
I've been trying to use Ribbon, with a configuration like so:
spring:
  application:
    name: gateway-service
  cloud:
    discovery:
      locator:
        enabled: true
    gateway:
      routes:
      - id: my-service
        uri: lb://my-load-balanced-service
        predicates:
        - Path=/
        filters:
        - TestFilter

ribbon:
  eureka:
    enabled: false

my-load-balanced-service:
  ribbon:
    listOfServers: localhost:8080, localhost:8081
However, when I try a request to the gateway, I get a 200 response with content-length 0, and my stubs have not been hit.
I have a very basic setup, no beans defined.
How can I get ribbon to play nice / or an alternative?
You should check whether the spring-cloud-starter-netflix-ribbon dependency is in your project or not.