Airflow LDAP with anonymous user

Is it possible to set up the Airflow authentication process with LDAP for admins and superusers while allowing read-only access for an anonymous user?
I wish I could provide a code sample or something, but I don't even know where to start. For now I have a working LDAP connection and the ability to log in with my user, without filters.

To do so, you need to enable RBAC along with LDAP.
Airflow ships with a set of roles by default: Admin, User, Op, Viewer, and Public. Only Admin users can configure or alter the permissions of other roles, but it is not recommended that Admins modify these default roles in any way by removing or adding permissions.
This blog post shows how the steps are done.
In the AIRFLOW_HOME directory:
modify airflow.cfg:
comment out the existing LDAP configuration
comment out 'authenticate = True'
set 'rbac = True' (a sketch of the resulting [webserver] section follows this list)
It's worth noting that some items in the [webserver] section are deprecated per the latest version.
create a webserver_config.py file in the AIRFLOW_HOME directory with the configuration shown below
truncate the ab_user and ab_user_role tables in your metadata database (see the SQL sketch after the webserver_config.py example)
restart the Airflow webserver
update AUTH_USER_REGISTRATION_ROLE = "Viewer" in webserver_config.py
restart the Airflow webserver again; any new user who logs in will now be treated as a Viewer, and you can log in as an Admin to change their role accordingly
Ensure python-ldap is installed: pip install python-ldap. If you get an error, follow this thread.
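For reference, here is a minimal sketch of what the relevant [webserver] entries in airflow.cfg end up looking like. The option names follow Airflow 1.10-era config, and the auth_backend line is only an example of the legacy setting being disabled, so verify both against your version:
[webserver]
# legacy LDAP auth is replaced by webserver_config.py, so comment it out
# authenticate = True
# auth_backend = airflow.contrib.auth.backends.ldap_auth
rbac = True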
webserver_config.py:
import os
from airflow import configuration as conf
from flask_appbuilder.security.manager import AUTH_LDAP
basedir = os.path.abspath(os.path.dirname(__file__))
SQLALCHEMY_DATABASE_URI = conf.get('core', 'SQL_ALCHEMY_CONN')
CSRF_ENABLED = True
AUTH_TYPE = AUTH_LDAP
AUTH_ROLE_ADMIN = 'Admin'
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Admin"
# AUTH_USER_REGISTRATION_ROLE = "Viewer"
AUTH_LDAP_SERVER = 'ldaps://$ldap:636/'
AUTH_LDAP_SEARCH = "DC=domain,DC=organization,DC=com"
AUTH_LDAP_BIND_USER = 'CN=bind-user,OU=serviceAccounts,DC=domain,DC=organization,DC=com'
AUTH_LDAP_BIND_PASSWORD = '**************'
AUTH_LDAP_UID_FIELD = 'sAMAccountName'
AUTH_LDAP_USE_TLS = False
AUTH_LDAP_ALLOW_SELF_SIGNED = False
AUTH_LDAP_TLS_CACERTFILE = '/etc/pki/ca-trust/source/anchors/$root_CA.crt'
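About the step of truncating ab_user and ab_user_role: a minimal SQL sketch, assuming a MySQL/MariaDB metadata database (ab_user_role references ab_user, so either clear the child table first or temporarily disable foreign key checks):
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE TABLE ab_user_role;  -- role assignments, references ab_user
TRUNCATE TABLE ab_user;       -- the users themselves
SET FOREIGN_KEY_CHECKS = 1;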

Related

db.create_all() doesn't create a database in a desired directory

I am trying to create a database for my Flask application in the main directory of my project. This is my code for initializing a database:
app.config["SQLALCHEMY_DATABASE_URI"] = 'sqlite:///users.db'
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)
Flask requires an application context, so this is how I create the database:
$ flask shell
>>> db.create_all()
I also tried doing it with:
$ python
>>> from app import app, db
>>> app.app_context().push()
>>> db.create_all()
Both of these options create the database in the /instance directory. Is there any way to get around this and create it in the main directory of the project?
The instance path is the preferred and default location for the database, and I recommend using it for security reasons. However, it is also possible to choose an alternative solution in which the full path is specified in the configuration.
The following configuration corresponds to an outdated variant, where the database is created in the current working directory. Please don't use this anymore.
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(os.getcwd(), 'users.db')
This corresponds to the current solution.
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(app.instance_path, 'users.db')
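To tie it together, here is a minimal, self-contained sketch of the current approach; the app.py layout, the User model, and the os.makedirs call are illustrative assumptions, not taken from the question:
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Keep the SQLite file in Flask's instance folder; create the folder if it is missing.
os.makedirs(app.instance_path, exist_ok=True)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///" + os.path.join(app.instance_path, "users.db")
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db = SQLAlchemy(app)

class User(db.Model):  # hypothetical model, just so create_all has a table to build
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)

with app.app_context():
    db.create_all()  # creates instance/users.db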

Issues after upgrading DB to mariaDB

I have recently built my Rundeck server and created a DB using MariaDB and pointed Rundeck to it. I followed the official documentation for this on the Rundeck site. Since I changed from the system DB to MariaDB, the service no longer starts.
My rundeck-config.properties file looks like this:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/var/lib/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
#change hostname here
grails.serverURL=http://IP OF SERVER:4440
dataSource.driverClassName=
dataSource.url = jdbc:mysql://IP OF SERVER/rundeck?autoReconnect=true&useSSL=false
dataSource.username = DB User
dataSource.password = Password
grails.plugin.databasemigration.updateOnStart=true
autoReconnect=true
#to store projects on backend
rundeck.projectsStorageType=db
#Encryption for key storage
rundeck.storage.provider.1.type=
rundeck.storage.provider.1.path=keys
rundeck.storage.converter.1.type=jasypt-encryption
rundeck.storage.converter.1.path=keys
rundeck.storage.converter.1.config.encryptorType=custom
rundeck.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.storage.converter.1.config.provider=BC
#Encryption for project config storage
rundeck.projectsStorageType=db
rundeck.config.storage.converter.1.type=jasypt-encryption
rundeck.config.storage.converter.1.path=projects
rundeck.config.storage.converter.1.config.password=7ee99cf09ffc59e7
rundeck.config.storage.converter.1.config.encryptorType=custom
rundeck.config.storage.converter.1.config.algorithm=PBEWITHSHA256AND128BITAES-CBC-BC
rundeck.config.storage.converter.1.config.provider=BC
rundeck.feature.repository.enabled=true
Can anyone help with this?
A couple of things here:
Your dataSource.driverClassName is empty; set it to org.mariadb.jdbc.Driver (check the full example here).
Your rundeck.storage.provider.1.type is also empty; set it as rundeck.storage.provider.1.type=db. Both corrected lines are sketched below.
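A minimal sketch of how those two entries should read in rundeck-config.properties (the driver class name is the one used by the MariaDB JDBC connector; the rest of your file stays as it is):
dataSource.driverClassName = org.mariadb.jdbc.Driver
rundeck.storage.provider.1.type = db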

Cannot create wordpress web app in azure with Terraform

I am trying to create a WordPress web application on Azure with Terraform. Each web app has its own database. I managed to create the resource groups, database server, and databases, but I cannot create the WordPress web app. I can create a plain web app and it all works fine, but not WordPress. When I create a WordPress web app manually and import it to see what is different, I see that WordPress has repo_url and branch pointing to the wordpress-azure repo on GitHub. When I try to incorporate this in code I get an error message.
resource "azurerm_mysql_database" "testtt" {
name = "testtt"
resource_group_name = azurerm_resource_group.RG_mok_2024.name
server_name = azurerm_mysql_server.wp-db-mok-2024.name
charset = "utf8"
coll`enter code here`ation = "utf8_unicode_ci"
}
resource "azurerm_app_service" "testtt" {
name = "testtt"
location = azurerm_resource_group.RG_mok_2024.location
resource_group_name = azurerm_resource_group.RG_mok_2024.name
app_service_plan_id = azurerm_app_service_plan.appserviceplan-wordpress-mok-6.id
site_config {
dotnet_framework_version = "v4.0"
scm_type = "GitHub"
default_documents = ["Default.htm","Default.html","Default.asp","index.htm","index.html","iistart.htm","default.aspx","index.php","hostingstart.html"]
}
source_control {
repo_url = "https://github.com/azureappserviceoss/wordpress-azure"
branch = "master"
}
connection_string {
name = "defaultConnection"
type = "MySQL"
value = "Database=testtt;Data Source=wp-db-mok-2024.mysql.database.azure.com;User Id=mysqladminun#wp-db-mok-2024;Password=password"
}
}
The error message I get when I use the source_control part of the code is:
Error: "source_control": this field cannot be set
The source_control field is only exported; it cannot be used to connect a deployment source.
To execute an automated deployment directly, there is currently only the workaround via a local-exec null_resource.
The source control integration can be created using an Azure CLI or PowerShell script which is then executed by the local-exec provisioner (an Azure CLI sketch follows below).
This works as follows:
resource "null_resource" "scm_integration" {
provisioner "local-exec" {
command = "${path.module}/enable_scm.ps1 -webAppName ${azurerm_app_service.testtt.name} -appResourceGroupName ${azurerm_resource_group.RG_mok_2024.name} -scmBranch master -repoUrl https://github.com/azureappserviceoss/wordpress-azure"
interpreter = ["pwsh", "-Command"]
}
}
In addition you need the PowerShell script enable_scm.ps1.
The workaround, including the script, is described completely in this GitHub issue.
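If you would rather avoid PowerShell, the same SCM hookup can be sketched with the Azure CLI. This is an assumption about what such a script would do, not the script from the linked issue, and the angle-bracket values are placeholders for the names Terraform actually creates:
az webapp deployment source config \
  --name <your-app-service-name> \
  --resource-group <your-resource-group> \
  --repo-url https://github.com/azureappserviceoss/wordpress-azure \
  --branch master \
  --manual-integration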
According to the Terraform documentation for azurerm_app_service, the source_control field is only exported, and it is ONLY exported when scm_type is set to LocalGit. You have set it to GitHub, and since it is an output value, according to the documentation you don't need it.
Furthermore, line 6 of your first resource contains a stray enter code here, but I guess that was pasted there by accident.
Finally, I hope that in the connection string value, your database password is not "password".
Can you try setting the source_control section before the site_config? There is an open issue for the Terraform azurerm_app_service provider that suggests this might be a workaround.
https://github.com/terraform-providers/terraform-provider-azurerm/issues/3696

Launch RancherOs with Openstack and Terraform

Hi, I'm running the latest OpenStack, Terraform, and RancherOS.
From the OpenStack UI I can get Rancher to work and I can pass in my own SSH keys, for instance, but you need to explicitly tick the configuration drive checkbox, otherwise it will not accept the user data.
I don't think this is possible with Terraform, is it?
resource "openstack_compute_instance_v2" "terraform-rancher" {
name = "terraform-rancher"
image_name = "RancherOs"
flavor_name = "t2.xlarge"
security_groups = ["default"]
#This is on the same path as my terraform file.
user_data = "${file("test.txt")}"
network {
name = "provider"
}
}
The instance launches and gets created, but when I look at the logs Rancher cannot seem to find the config:
cloud-init: Datasource unavailable, skipping: cloud-drive: /media/config-2 (lastError: no such file or directory)
From the OpenStack UI it works fine, but as stated you have to tick the config drive checkbox.
cloud-init: Datasource available: cloud-drive: /media/config-2
To get it to work like in the UI, the config_drive parameter in the instance configuration needs to be set to true, as sketched below.
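A minimal sketch of the adjusted resource; apart from the added config_drive line it mirrors the configuration from the question:
resource "openstack_compute_instance_v2" "terraform-rancher" {
  name            = "terraform-rancher"
  image_name      = "RancherOs"
  flavor_name     = "t2.xlarge"
  security_groups = ["default"]
  # force the metadata to be served via a config drive, like the UI checkbox
  config_drive    = true
  user_data       = "${file("test.txt")}"
  network {
    name = "provider"
  }
}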

How to access a secured Nexus with sbt?

I'm trying to access a Nexus repository manager which requires some basic authentication. Everything works fine from Maven2 but when I try to configure things in SBT it can't find the artifacts. It is using a custom repository pattern (see this related question) but I don't think that should matter. In any case the relevant configuration is here.
Project.scala:
val snapshotsName = "Repository Snapshots"
val snapshotsUrl = new java.net.URL("http://nexusHostIp:8081/nexus/content/repositories/snapshots")
val snapshotsPattern = "[organisation]/[module]/[revision]-SNAPSHOT/[artifact]-[revision](-[timestamp]).[ext]"
val snapshots = Resolver.url(snapshotsName, snapshotsUrl)(Patterns(snapshotsPattern))
Credentials(Path.userHome / ".ivy2" / ".credentials", log)
val dep = "group" % "artifact" % "0.0.1" extra("timestamp" -> "20101202.195418-3")
~/.ivy2/.credentials:
realm=Snapshots Nexus
host=nexusHostIp:8081
user=nexususername
password=nexuspassword
According to a similar discussion in the SBT user group this should work fine, but I am getting the following when I try to build:
==== Repository Snapshots: tried
[warn] -- artifact group#artifact;0.0.1!artifact.jar:
[warn] http://nexusHostIp:8081/nexus/content/repositories/snapshots/group/artifact/0.0.1-SNAPSHOT/artifact-0.0.1-20101202.195418-3.jar
I'm fairly certain this is a credentials problem and not something else because I can hit the URL it says it is trying directly and download the jar (after authenticating).
I have also tried declaring the credentials inline (even though it is less than ideal) like so:
Credentials.add("Repository Snapshots", "nexusHostIp", "nexususername", "nexuspassword")
Here's what I did (sbt 0.13 + artifactory - setup should be similar for nexus):
1) Edited the file ~/.sbt/repositories as specified here: http://www.scala-sbt.org/0.13.0/docs/Detailed-Topics/Proxy-Repositories.html
[repositories]
local
my-ivy-proxy-releases: http://repo.company.com/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
my-maven-proxy-releases: http://repo.company.com/maven-releases/
2) Locked down my artifactory to disable anonymous access.
3) Created a credentials file in ~/.sbt/.credentials
realm=Artifactory Realm
host=artifactory.mycompany.com
user=username
password=password
4) Created a file under ~/.sbt/0.13/plugins/credentials.sbt that wires up the default credentials
credentials += Credentials(Path.userHome / ".sbt" / ".credentials")
Now when my project loads sbt hits artifactory like normal.
The reason I did it this way is to keep the repository definitions, etc, out of the project files to enable teams to have flexibility (they can set up an internal server to serve in-progress artifacts, etc).
-Austen
UPDATE: This answer does not work in recent sbt versions - see Austen's answer instead.
Alright I finally got this sorted out.
snapshotsName can be anything. realm in .credentials must be the HTTP Authentication realm that shows up when trying to hit the URL of the repository (nexus in my case). realm is also the first parameter of Credentials.add. So that line should have been
Credentials.add("Sonatype Nexus Repository Manager", "nexusHostIp", "nexususername", "nexuspassword")
The host name is just the ip or DNS name. So in .credentials host is just nexusHostIp without the port number.
So the working Project configuration is:
val snapshotsName = "Repository Snapshots"
val snapshotsUrl = new java.net.URL("http://nexusHostIp:8081/nexus/content/repositories/snapshots")
val snapshotsPattern = "[organisation]/[module]/[revision]-SNAPSHOT/[artifact]-[revision](-[timestamp]).[ext]"
val snapshots = Resolver.url(snapshotsName, snapshotsUrl)(Patterns(snapshotsPattern))
Credentials(Path.userHome / ".ivy2" / ".credentials", log)
val dep = "group" % "artifact" % "0.0.1" extra("timestamp" -> "20101202.195418-3")
With a .credentials file that looks like:
realm=Sonatype Nexus Repository Manager
host=nexusHostIp
user=nexususername
password=nexuspassword
Where "Sonatype Nexus Repository Manager" is the HTTP Authentication realm.
Following the SBT documentation:
There are two ways to specify credentials for such a repository:
Inline
credentials += Credentials("Some Nexus Repository Manager", "my.artifact.repo.net", "admin", "password123")
External File
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
The .credentials file is a properties file with keys realm, host, user, and password. For example:
realm=Some Nexus Repository Manager
host=my.artifact.repo.net
user=admin
password=password123
If the SBT launcher is failing to download a new version of SBT from your proxy, and ~/.sbt/boot/update.log shows that you're getting 401 authentication errors, you can use the environment variable SBT_CREDENTIALS to specify where the Ivy credentials file is.
Either of these should work and download the new sbt version:
SBT_CREDENTIALS='/home/YOUR_USER_NAME/.ivy2/.credentials' sbt
Put export SBT_CREDENTIALS="/home/YOUR_USER_NAME/.ivy2/.credentials" in your .bashrc (or .zshrc), start a new shell session, and then run sbt.
(You'll need to have the ~/.ivy2/.credentials file set up as other answers here have shown.)
Source: https://github.com/sbt/sbt/commit/96e5a7957c830430f85b6b89d7bbe07824ebfc4b
This worked for me. I'm using SBT version 0.13.15:
~/.ivy2/.my-credentials (host without port):
realm=Sonatype Nexus Repository Manager
host=mynexus.mycompany.com
user=my_user
password=my_password
build.sbt (nexus url with port):
import sbt.Credentials
...
credentials += Credentials(Path.userHome / ".ivy2" / ".my-credentials")
...
resolvers in ThisBuild ++= Seq(
MavenRepository("my-company-nexus", "https://mynexus.mycompany.com:8081/repository/maven-releases/")
)
Check all files that might contain credentials (a sketch of one such file follows).
In my case I had a new project on sbt 1.0 (instead of good old 0.13) and a ~/.sbt/1.0/global.sbt file containing my credentials, which I had forgotten about. So after a mandatory password change, the Artifactory downloads broke and kept locking my account.
It would be nice if the chain of credential files could be easily inspected. It would also be nice if SBT were a bit more careful and first checked whether authentication/authorization is correct before starting to download X files and locking my account after three failed authentication attempts.
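For reference, a global credentials file that is easy to forget about can look like this; the path matches sbt 1.x conventions and the realm, host, and account values are illustrative:
// ~/.sbt/1.0/global.sbt
credentials += Credentials("Artifactory Realm", "artifactory.mycompany.com", "username", "password")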
