I'm using Google OAuth2 to authenticate in Airflow version 1.10.2. If I set up a user as Admin, they can see all DAGs on the /home page, as expected, and can manipulate them as well. If I set them up as a 'User', they still see all of the DAGs, no matter what I set the "owner" to in the default_args of the DAG. The user does not have the ability to edit/run any of the DAGs, however. I expected the DAGs to be filtered to only those owned by the user, based on the filter_by_owner setting referenced below.
I've noticed that the actual username associated with my company-generated Google account is something like 'google_xxxxxxxxxxxxx', and I've tried that as the owner, as well as 'FirstName LastName' and just 'FirstName'.
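For reference, this is roughly how the owner is set in one of the DAGs (a trimmed-down sketch; the DAG id, dates, and task are placeholders, not my real pipeline):
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    # I've tried 'google_xxxxxxxxxxxxx', 'FirstName LastName', and just 'FirstName' here
    "owner": "google_xxxxxxxxxxxxx",
    "start_date": datetime(2019, 1, 1),
}

dag = DAG("example_owned_dag", default_args=default_args, schedule_interval="@daily")

noop = DummyOperator(task_id="noop", dag=dag)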
I have the following variables set on the Airflow server:
AIRFLOW__WEBSERVER__RBAC=true
AIRFLOW__WEBSERVER__FILTER_BY_OWNER=true
I'm using the FAB webserver_config.py file to set the auth type:
AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "User"
Any thoughts on what I should try, or what I might be missing?
@LoganX I'm trying to set up Google auth with Airflow RBAC as well, but I'm missing something.
Can you post an example of how you configured your webserver_config.py? I think there is something wrong with my OAUTH_PROVIDERS section.
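For context, my OAUTH_PROVIDERS section currently looks roughly like this (client id/secret redacted; it follows the flask-oauthlib-style layout that Airflow 1.10.x / FAB expects):
OAUTH_PROVIDERS = [{
    'name': 'google',
    'token_key': 'access_token',
    'icon': 'fa-google',
    'remote_app': {
        'base_url': 'https://www.googleapis.com/oauth2/v2/',
        'request_token_params': {'scope': 'email profile'},
        'access_token_url': 'https://oauth2.googleapis.com/token',
        'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
        'request_token_url': None,
        'consumer_key': '<client_id>',
        'consumer_secret': '<client_secret>',
    }
}]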
Finally, I (un)successfully deployed my new app. I had many errors in production. I had followed a tutorial to implement the login/reset/register part, and there was an error where login.js did not correctly import signInWithEmailAndPassword from ./firebase, so I imported it with import { signInWithEmailAndPassword } from "firebase/auth"; as it should be. I had another error, Error [FirebaseError]: Firebase: Error (auth/invalid-api-key)., but I just changed my variable to be wrapped in quotes: "NEXT(...) ".
But now I have deployed everything and it should work perfectly, yet when I try to log in I see an alert Firebase: Error (auth/api-key-not-valid.-please-pass-a-valid-api-key.).
and I do not know what I am supposed to do to fix it.
Not only does the login form not work, but no other section of the application works either. I can't fetch data, etc.
Btw, I deployed on Vercel.
Btw, it all works perfectly locally.
Update:
I updated the variable in the Vercel environment variables, e.g.
NEXT_PUBLIC_MEASUREMENT_ID, but there is no difference. The documentation says it should be NEXT_PUBLIC_VERCEL_MEASUREMENT_ID, but this approach does not work either.
In my .env.local:
NEXT_PUBLIC_VERCEL_API_KEY=
In my .env:
NEXT_PUBLIC_VERCEL_API_KEY=
(it is in .gitignore)
On the Vercel server all variables are the same:
NEXT_PUBLIC_VERCEL_API_KEY=
To solve my problem with no access to Firebase in production I had to do several things:
correctly set the variables
set up an authorized domain
Ad 1:
npm i -g vercel
vercel env add NAME
then provide the variable's value and select which environments it applies to (using the space bar).
The variable should look like NEXT_PUBLIC_API_KEY="..." and be referenced in code as process.env.NEXT_PUBLIC_API_KEY (without the "..." value).
then vercel --prod
Ad 2: In the Firebase console go to Authentication => Settings =>
Authorized domains and add your domain.
I have started using Apache Airflow (built on FAB) and enabled authentication through OAuth (as shown in FAB Security). I have only one OAUTH_PROVIDER (Azure).
FYI, this was on Airflow version 2.1.4.
When I launched my Airflow home page, it used to open the login page of my OAuth provider directly.
Now, the real problem started when I upgraded my Airflow to 2.2.4 and configured the same OAuth (Azure) provider.
When I launch my Airflow home page, an intermediate page comes up first.
After clicking the "Sign In with azure" button on it, the provider's login page comes up.
Compared to the older Airflow, the latest version has an extra page.
Why does it matter to me?
We are rendering Airflow in a web app, and this extra "sign in with" page does not look good.
Please provide some info on skipping that extra interaction.
After upgrading Airflow, the Flask AppBuilder version was probably also bumped, and that caused the change in behavior. Basically:
Flask AppBuilder < 3.4.0 made it possible to have automatic sign-in when just one OAuth provider was configured.
Flask AppBuilder >= 3.4.0 changed the behavior and made this impossible. This is the PR (with justification) that removed the functionality: https://github.com/dpgaspar/Flask-AppBuilder/pull/1707
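To confirm which Flask AppBuilder version your Airflow webserver environment actually runs (and therefore which behavior you get), a quick check from the same virtualenv/container, assuming the package exposes __version__ as recent releases do:
# Run this inside the same environment as the Airflow webserver.
import flask_appbuilder
print(flask_appbuilder.__version__)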
If you want the previous behavior in your deployment, this is what would most likely work (although I didn't test it):
Prepare a custom view class, let's call it CustomAuthOAuthView - it brings back the old behavior of the login method. (I used v4.1.3 as reference code and modified it slightly.)
webserver_config.py:
import logging
from typing import Optional

import jwt
from flask import flash, g, redirect, request, url_for
from flask_appbuilder import expose
from flask_appbuilder._compat import as_unicode
from flask_appbuilder.security.views import AuthOAuthView
from werkzeug.wrappers import Response as WerkzeugResponse

log = logging.getLogger(__name__)


class CustomAuthOAuthView(AuthOAuthView):
    @expose("/login/")
    @expose("/login/<provider>")
    @expose("/login/<provider>/<register>")
    def login(self, provider: Optional[str] = None) -> WerkzeugResponse:
        log.debug("Provider: {0}".format(provider))
        if g.user is not None and g.user.is_authenticated:
            log.debug("Already authenticated {0}".format(g.user))
            return redirect(self.appbuilder.get_url_for_index)

        if provider is None:
            if len(self.appbuilder.sm.oauth_providers) > 1:
                # More than one provider configured: keep the provider-selection page.
                return self.render_template(
                    self.login_template,
                    providers=self.appbuilder.sm.oauth_providers,
                    title=self.title,
                    appbuilder=self.appbuilder,
                )
            else:
                # Exactly one provider: redirect to it immediately (old behavior).
                provider = self.appbuilder.sm.oauth_providers[0]["name"]

        log.debug("Going to call authorize for: {0}".format(provider))
        state = jwt.encode(
            request.args.to_dict(flat=False),
            self.appbuilder.app.config["SECRET_KEY"],
            algorithm="HS256",
        )
        try:
            if provider == "twitter":
                return self.appbuilder.sm.oauth_remotes[provider].authorize_redirect(
                    redirect_uri=url_for(
                        ".oauth_authorized",
                        provider=provider,
                        _external=True,
                        state=state,
                    )
                )
            else:
                return self.appbuilder.sm.oauth_remotes[provider].authorize_redirect(
                    redirect_uri=url_for(
                        ".oauth_authorized", provider=provider, _external=True
                    ),
                    state=state.decode("ascii") if isinstance(state, bytes) else state,
                )
        except Exception as e:
            log.error("Error on OAuth authorize: {0}".format(e))
            flash(as_unicode(self.invalid_login_message), "warning")
            return redirect(self.appbuilder.get_url_for_index)
Prepare a custom security manager. Let's call it CustomSecurityManager. It will use your custom view to handle login.
webserver_config.py:
from airflow.www.security import AirflowSecurityManager
class CustomSecurityManager(AirflowSecurityManager):
    authoauthview = CustomAuthOAuthView
Configure Airflow to use your CustomSecurityManager
webserver_config.py:
SECURITY_MANAGER_CLASS = CustomSecurityManager
More on this:
https://flask-appbuilder.readthedocs.io/en/latest/security.html#your-custom-security
https://developer.squareup.com/blog/secure-apache-airflow-using-customer-security-manager/
I have a question about Airflow v1.10.3. We recently upgraded Airflow from v1.9 to v1.10.3. With the new upgrade, we are experiencing a situation where any Celery execute commands coming in from the UI are not getting queued/executed by the message broker and Celery workers.
Based on the Celery FAQ: https://docs.celeryproject.org/en/latest/faq.html#why-is-task-delay-apply-the-worker-just-hanging, it points to an authentication issue: the user not having access.
We had web authentication (Google OAuth) in place in v1.9 using the following config:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.google_auth

[google]
client_id = <client id>
client_secret = <secret key>
oauth_callback_route = /oauth2callback
domain = <domain_name>.com
Will the above config values still work, or do we need to set rbac = True and provide the Google OAuth credentials in webserver_config.py?
webserver_config.py:
from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Admin"

OAUTH_PROVIDERS = [{
    'name': 'google',
    'whitelist': ['@yourdomain.com'],  # optional
    'token_key': 'access_token',
    'icon': 'fa-google',
    'remote_app': {
        'base_url': 'https://www.googleapis.com/oauth2/v2/',
        'request_token_params': {
            'scope': 'email profile'
        },
        'access_token_url': 'https://oauth2.googleapis.com/token',
        'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
        'request_token_url': None,
        'consumer_key': '<your_client_id>',
        'consumer_secret': '<your_client_secret>',
    }
}]
Any help is very much appreciated. Thanks.
In my experience, both will work. Of course, since they call the FAB-based UI the "new UI", the old one will probably be killed off eventually.
Your problem, though, doesn't sound like it has anything to do with user authentication, but with Celery access. It sounds like Airflow and/or Celery are not reading celery_result_backend or one of the other renamed options when they should.
Search for the Celery config in the UPDATING document for the full list of renamed options.
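For example, if I remember the 1.10 UPDATING notes correctly, the old [celery] celery_result_backend option became result_backend, so an airflow.cfg that still uses the old name may not be picked up as expected. An illustrative snippet (the connection string is a placeholder, not your actual value):
[celery]
# old name (worked in 1.9):
# celery_result_backend = db+postgresql://airflow:***@postgres/airflow
# new name (1.10.x):
result_backend = db+postgresql://airflow:***@postgres/airflow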
I am trying to create a dynamic link using the Ruby SDK. I believe I have everything right, but I'm getting a
Google::Apis::ServerError: Server error
When creating the URL
Could you help me figure out what I'm missing/doing wrong, or whether this is a Google issue?
Assuming I have generated OAuth credentials requesting the appropriate scopes, I am doing:
request = ::Google::Apis::FirebasedynamiclinksV1::CreateManagedShortLinkRequest.new(
  dynamic_link_info: ::Google::Apis::FirebasedynamiclinksV1::DynamicLinkInfo.new(
    domain_uri_prefix: Rails.application.secrets.firebase_dynamic_link_prefix,
    link: campaign.linkedin_url,
  ),
  suffix: ::Google::Apis::FirebasedynamiclinksV1::Suffix.new(
    option: 'SHORT',
  ),
  # name: "Linkedin acquisition URL of #{camp.utm_campaign_name} for #{camp.contractor.name} <#{camp.contractor.email}>",
  name: "Test of generation",
)
# => #<Google::Apis::FirebasedynamiclinksV1::CreateManagedShortLinkRequest:0x000021618baa88
#     @dynamic_link_info=#<Google::Apis::FirebasedynamiclinksV1::DynamicLinkInfo:0x000021618bad80
#       @domain_uri_prefix="https://example.page.link",
#       @link="https://www.example.com/?invitation_code=example&signup=example&utm_campaign=example&utm_medium=example&utm_source=example">,
#     @name="Test of generation",
#     @suffix=#<Google::Apis::FirebasedynamiclinksV1::Suffix:0x000021618babf0
#       @option="SHORT">
#    >
link_service.create_managed_short_link(request)
def link_service
  @link_service ||= begin
    svc = ::Google::Apis::FirebasedynamiclinksV1::FirebaseDynamicLinksService.new
    svc.authorization = oauth_service.credentials
    svc
  end
end
I know the OAuth scopes seem to be working, as previously I was getting
Google::Apis::ClientError: forbidden: Request had insufficient authentication scopes.
but I fixed that by expanding the OAuth scopes to cover Firebase. Also, my request seems correct: when I omit one of the parameters (like the name) I get an appropriate validation error such as
Google::Apis::ClientError: badRequest: Created Managed Dynamic Link must have a name
My only clue is that create_managed_short_link actually takes more parameters. In the example above, I have substituted our real Firebase prefix with example, but I do own the real prefix I am using, and link generation directly from the Firebase console works.
I've updated my Google SDK to the most recent version to date:
- google-api-client-0.30.3
Unfortunately, generating managed short links through the REST API is not currently supported,
as stated here by someone who works (or worked) on the Dynamic Links team itself.
For now we can only use CreateShortDynamicLinkRequest; however, this endpoint does not allow specifying a custom suffix (i.e. https://example.com/my-custom-suffix).
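For completeness, this is roughly what the unmanaged variant looks like against the REST endpoint (sketched here in Python with requests rather than the Ruby SDK; the web API key, prefix, and link are placeholders):
import requests

# shortLinks.create - the unmanaged endpoint that is still available over REST
resp = requests.post(
    "https://firebasedynamiclinks.googleapis.com/v1/shortLinks",
    params={"key": "YOUR_WEB_API_KEY"},  # placeholder Firebase web API key
    json={
        "dynamicLinkInfo": {
            "domainUriPrefix": "https://example.page.link",
            "link": "https://www.example.com/?invitation_code=example",
        },
        "suffix": {"option": "SHORT"},  # only SHORT/UNGUESSABLE - no custom suffix
    },
)
resp.raise_for_status()
print(resp.json()["shortLink"])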
I'm writing a cron job in Python for OpenStack that should read server IDs from a database and then get the servers from the API using python-novaclient.
In pseudo code things should work like this:
session = login_to_keystone(user="admin", password="something") #or use a token
nova_client = get_nova_client(session)
# the servers array holds dictionaries with server, user and tenant ids as strings,
# e.g. {"server_id": "1-2-3", "tenant_id": "456", "user_id": "11111-2222"}
for server in servers:
    server_obj = nova_client.servers.get(server["server_id"])
    # ...do stuff with server_obj (read data from it, delete, ...)...
What I've come up with is the following, but it's not right, as I get an EndpointNotFound exception. I'm using DevStack with Juno.
from keystoneclient.v2_0 import client as keystone_client
from keystoneclient import session
from novaclient import client as nova_client

# the url is the admin endpoint
keystone = keystone_client.Client(token="my-admin-token",
                                  auth_url="http://192.168.1.1:35357/v2.0",
                                  endpoint="http://192.168.1.1:35357/v2.0")
key_session = session.Session(auth=keystone)
nova = nova_client.Client(2, session=key_session)

# let's assume the servers array is already populated
for server in servers:
    server_obj = nova.servers.get(server.server_id)  # Exception happens here
I need to run this as admin, since servers can belong to any tenant and might even be deleted by the cron job.
Thanks for any help!
UPDATE:
The information I needed was that I can use the admin tenant to retrieve all servers (regardless of their owner). This also allows me to use the publicURL.
My current solution looks like this:
from keystoneclient.auth.identity import v2
from keystoneclient import session
from novaclient import client as nova_client

auth = v2.Password(auth_url="http://192.168.1.1:5000/v2.0",
                   username="admin",
                   password="my_secrete",
                   tenant_name="admin")  # the admin's tenant
auth_session = session.Session(auth=auth)
nova = nova_client.Client(2, session=auth_session)

for server in servers:
    ...  # do stuff like nova.servers.get("some id")
In order to get a list of servers from all tenants, you need to perform two tasks:
Log in as a user with admin privileges, and
Tell the Nova API that you want a list of servers for all tenants.
Logging in as an admin user
It looks like you're trying to use the admin_token defined in keystone.conf for authentication. This may work, but that mechanism is meant primarily as a means of bootstrapping Keystone. When interacting with the other services, you are meant to log in using a username/password pair that has been defined in Keystone with admin credentials. I think that what @anoop.babu has given you will work just fine:
>>> nova = nova_client.Client('2', USERNAME, PASSWORD,
...                           PROJECT_ID, AUTH_URL)
Where:
USERNAME = admin
PASSWORD = password_for_admin_user
PROJECT_ID = admin
AUTH_URL = http://your_api_server:5000/v2.0
We can test this client out using something like:
>>> nova.hypervisors.list()
[<Hypervisor: 2>]
That tells us we've authenticated successfully.
Listing all servers
If we simply call nova.servers.list(), we are asking for a list of Nova servers owned by the admin tenant, which should generally be empty:
>>> nova.servers.list()
[]
In order to see servers from other tenants, you need to pass the all_tenants search option:
>>> nova.servers.list(search_opts={'all_tenants':1})
[<Server: cirros0>]
And that should get you where you want to be.
Here you have given the auth_url as your endpoint; the endpoint should actually be the publicURL of the service.
You can find the publicURL using the CLI command below
nova endpoints
and check the keystone entry for the details.
You can also get an authenticated keystone object, without specifying an endpoint, using the v2.0 client as below:
import keystoneclient.v2_0.client as keystone_client
keystone = keystone_client.Client(auth_url=my_auth_url, username=my_user_name,
                                  password=my_password, tenant_name=my_tenant_name)
Then create a session object as below:
from keystoneclient import session
key_session = session.Session(auth=keystone)
An alternative, simplified approach to get the nova client authorized, without creating a keystone client explicitly, is:
from novaclient import client as nova_client
nova = nova_client.Client('2', USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)
server_obj = nova.servers.find( name=my_server_name )