Using Airflow 2.0.2, I'm trying to use the Airflow REST API to trigger DAG Runs. When I run a simple GET like
curl -X GET --user "fooUser:passw0rd" "${ENDPOINT_URL}/api/v1/pools"
I get expected results:
{
  "pools": [
    {
      "name": "default_pool",
      "occupied_slots": 0,
      "open_slots": 128,
      "queued_slots": 0,
      "running_slots": 0,
      "slots": 128
    }
  ],
  "total_entries": 1
}
So the user fooUser does have some basic access to the API. But trying to run
curl -X POST -H "Content-Type: application/json" -d '{}' --user "fooUser:passw0rd" "${ENDPOINT_URL}/api/v1/dags/myDag/dagRuns"
I get
{
  "detail": null,
  "status": 403,
  "title": "Forbidden",
  "type": "https://airflow.apache.org/docs/2.0.2/stable-rest-api-ref.html#section/Errors/PermissionDenied"
}
If I grant user fooUser the Admin role and use the same curl command, I successfully get
{
  "conf": {},
  "dag_id": "myDag",
  "dag_run_id": "manual__2021-12-13T21:37:42.959274+00:00",
  "end_date": null,
  "execution_date": "2021-12-13T21:37:42.959274+00:00",
  "external_trigger": true,
  "start_date": "2021-12-13T21:37:42.964609+00:00",
  "state": "running"
}
I don't want this user to have Admin permissions, though. I want just enough to allow them to trigger DAG Runs using the API. But looking at the list of permissions granted to Admin, I can't tell which ones fooUser needs in order to accomplish this.
Which specific permission(s) does a user need in order to be allowed to trigger DAG Runs using the Airflow API?
Testing a little bit, I've come to these rules:
[can create on DAG Runs, can edit on DAGs]
These are just to TRIGGER a DAG run. If you want to edit a DAG run, you'll have to alter those permissions.
Airflow version 2.3.4
[UPDATE]
Here are all permissions needed for each resource:
https://airflow.apache.org/docs/apache-airflow/stable/security/access-control.html#dag-level-permissions
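A minimal sketch of wiring that up with the Airflow CLI, assuming Airflow 2.3+ (where roles add-perms exists); the role name DagTriggerer is hypothetical, and the action/resource names follow the list above:
airflow roles create DagTriggerer
airflow roles add-perms DagTriggerer --action can_create --resource "DAG Runs"
airflow roles add-perms DagTriggerer --action can_edit --resource DAGs
airflow users add-role --username fooUser --role DagTriggerer
After that, the original curl POST should succeed for fooUser without the Admin role.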
I want to generate a Git username/password for CodeCommit. I can see how to do that from the AWS Web Console, but is there a way to do it via the AWS CLI?
You can use aws iam create-service-specific-credential, documented here
aws iam create-service-specific-credential --service-name codecommit.amazonaws.com --user-name xxxxxx
{
  "ServiceSpecificCredential": {
    "CreateDate": "2022-10-28T12:19:19+00:00",
    "ServiceName": "codecommit.amazonaws.com",
    "ServiceUserName": "xxxxxx-at-yyyyyyyyyyyy",
    "ServicePassword": "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ",
    "ServiceSpecificCredentialId": "LLLLLLLLLLLLLLLLLLLLL",
    "UserName": "xxxxxx",
    "Status": "Active"
  }
}
The ServiceUserName can be used as the Git username and the ServicePassword as the Git password.
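For example, cloning over HTTPS with those credentials (region and repository name hypothetical):
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myRepo
Git will prompt for a username and password; enter the ServiceUserName and ServicePassword from the output above.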
I am using Vault AppRole with Airflow as the secrets backend, and it keeps throwing a permission-denied error on $VAULT_ADDR/v1/auth/approle/login. I tried using AppRole from the CLI like:
vault write auth/approle/login role_id="$role_id" secret_id="$secret_id"
and it works fine.
But if I try it using the API:
curl --request POST --data @payload.json $VAULT_ADDR/v1/auth/approle/login
where payload.json contains the role and secret IDs, it fails with permission denied.
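For reference, a payload.json of the shape the AppRole login endpoint expects (values hypothetical):
{
  "role_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "secret_id": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
}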
Here is my policy:
vault policy write test-policy -<<EOF
path "kv/data/airflow/*" {
capabilities = [ "read", "list" ]
}
EOF
It works fine for reading on this path.
and role:
vault write auth/approle/role/test-role token_ttl=4h token_max_ttl=5h token_policies="test-policy"
I don't know why it is failing via the API.
An important thing to mention is that I am using cloud-based HCP Vault.
The problem is with your AppRole authentication. You need to provide the admin namespace in your URL.
Change this:
curl --request POST --data @payload.json $VAULT_ADDR/v1/auth/approle/login
To this:
curl --request POST --data @payload.json $VAULT_ADDR/v1/admin/auth/approle/login
Furthermore, if you are trying to access it from a third-party tool like Airflow, then try adding "namespace=admin" in your config file.
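A sketch of what that could look like in airflow.cfg, assuming the Hashicorp provider forwards extra backend kwargs (such as namespace) to the underlying hvac client; the mount point, paths, and credentials are placeholders:
[secrets]
backend = airflow.providers.hashicorp.secrets.vault.VaultBackend
backend_kwargs = {"connections_path": "airflow/connections", "mount_point": "kv", "url": "https://your-vault-cluster.hashicorp.cloud:8200", "auth_type": "approle", "role_id": "...", "secret_id": "...", "namespace": "admin"}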
Found the problem. HCP Vault uses namespaces (default = admin). The namespace was needed in the URL:
$VAULT_ADDR/v1/admin/auth/approle/login
but the problem still exists in Airflow's Hashicorp provider. Changing the auth_mount_point still concatenates it at the end, as:
$VAULT_ADDR/v1/auth/{$auth_mount_point}
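As an aside, Vault also accepts the namespace as a request header instead of a URL prefix, which can help when a client library controls the path; a hedged equivalent of the working call:
curl --header "X-Vault-Namespace: admin" --request POST --data @payload.json $VAULT_ADDR/v1/auth/approle/login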
I am working on "Patch an AMI and update an Auto Scaling group" and followed the AWS document to configure it, but I am stuck at "Task 3: Create a runbook, patch the AMI, and update the Auto Scaling group" with the below error. To fix it, I added "user data" while starting the instance (startInstances). As it accepts only base64, I converted the command and provided the base64 (UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==).
I tried to execute with the below user data, but neither attempt worked; I even tried adding a new step with the same commands, but it still failed to patch the AMI.
Tried the below script:
<powershell>
powershell.exe -Command Start-Service -Name AmazonSSMAgent
</powershell>
<persist>true</persist>
Tried to start and restart the SSM agent:
Restart-Service AmazonSSMAgent
base64: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
YAML sample:
mainSteps:
  - name: startInstances
    action: 'aws:runInstances'
    timeoutSeconds: 1200
    maxAttempts: 1
    onFailure: Abort
    inputs:
      ImageId: '{{ sourceAMIid }}'
      InstanceType: m3.large
      MinInstanceCount: 1
      MaxInstanceCount: 1
      SubnetId: '{{ subnetId }}'
      UserData: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
Still, I am seeing the below error.
Step timed out while step is verifying the SSM Agent availability on the target instance(s). SSM Agent on Instances: [i-xxxxxxxx] are not functioning. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
Your suggestions/solutions would help me a lot. Thank you.
I have troubleshot and fixed the issue.
The issue was a missing security group on the instance. For the SSM service's SendCommand API to communicate with the SSM agent, the instance needs a security group that allows HTTPS on port 443. Once I attached an SG allowing port 443, the SSM agent could communicate with the EC2 instance.
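For example, with the AWS CLI (the group ID is hypothetical; the agent initiates outbound HTTPS to the SSM endpoints):
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0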
The EC2 instance's IAM role should also have an SSM access policy attached to it.
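For instance, attaching the AWS-managed policy commonly used for SSM-managed instances (role name hypothetical):
aws iam attach-role-policy --role-name MyAutomationInstanceRole --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore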
We might hit the same issue when the SSM agent is not running on the EC2 instance; in that case, we need to provide user data or add a new step to the Systems Manager document (YAML or JSON).
If you are working on a Windows instance, use the below document to start the SSM agent. If it's a Linux server, use the equivalent Linux script/commands.
{
  "schemaVersion": "2.0",
  "description": "Start SSM agent on instance",
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runPowerShellScript",
      "inputs": {
        "runCommand": [
          "Start-Service AmazonSSMAgent"
        ]
      }
    }
  ]
}
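Once registered as a Command document, it could be run roughly like this (document name, file name, and instance ID hypothetical):
aws ssm create-document --name "StartSSMAgent" --document-type "Command" --content file://start-ssm-agent.json
aws ssm send-command --document-name "StartSSMAgent" --instance-ids i-0123456789abcdef0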
Following "OpenStack installation guide for Ubunut 12.04 LTS" have configured a network of virtual machine with keystone e swift services. I have configured swift to use keystone service for authentication.All work perfectly. The problem has arisen when trying to configure a second swift service on keystone. On key stone i have create the two swift services called "swift" and "swift2" both with type property set set to the value "object-store". I set the endpoints for both services: "http://proxyserver1:8080/v1/AUTH_%(tenant_id)s" for first swift service and "http://proxyserver2:8080/v1/AUTH_%(tenant_id)s" for the second. For each swift service i have created a user with role "admin" and both belonging to same tenant "service". When i try to authenticate a user on keystone using:
curl -s -d '{"auth": {"tenantName": "service", "passwordCredentials": {"username": "swift", "password": "PASSWORD"}}}' -H 'Content-type: application/json' http://identity-manager:5000/v2.0/tokens
I receive a response with an incorrect serviceCatalog array. It contains only two entries: the endpoint for keystone itself and one for a single object-store service, so one object-store service endpoint is missing. Moreover, the endpoints for the one object-store service returned are wrong, because they are a mix of properties from the two object-store services:
{
  "endpoints": [
    {
      "adminURL": "http://proxyserver2:8080",
      "region": "regionOne",
      "internalURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41",
      "id": "0d30428e6a2d4035baf1c84401c8ff1b",
      "publicURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41"
    }
  ],
  "endpoints_links": [],
  "type": "object-store",
  "name": "swift2"
}
My question is whether it is possible to configure two swift clusters on keystone. If the answer is yes, where could I have gone wrong?
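For reference, a sketch of the legacy keystone v2 CLI calls the setup above implies for the second cluster (flags assume the old python-keystoneclient; the service ID placeholder is hypothetical):
keystone service-create --name swift2 --type object-store --description "Second Swift cluster"
keystone endpoint-create --region regionOne --service-id <swift2-service-id> \
  --publicurl "http://proxyserver2:8080/v1/AUTH_%(tenant_id)s" \
  --internalurl "http://proxyserver2:8080/v1/AUTH_%(tenant_id)s" \
  --adminurl "http://proxyserver2:8080"
Each endpoint-create must reference its matching service ID.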
I recently got a Hubot set up for IRC and it works fine. I'm trying to add this script.
I'm not entirely understanding the setup instructions, however. They read:
curl -H "Authorization: token <your api token>" \
-d '{"name":"web","active":true,"events":["pull_request"],"config":{"url":"<this script url>","content_type":"json"}}' \
https://api.github.com/repos/<your user>/<your repo>/hooks
I don't understand what the "url":"<this script url>" refers to. Anyone know?
I'm deploying to heroku if that helps.
Adding more explanation to MikeKusold's answer:
The curl command creates the GitHub hook, so it points the hook at the receiver for the notifications.
"config": {
"url": "http://example.com/webhook",
"content_type": "json"
}
The hook's receiver is the Hubot plugin, so the URL path is defined in that script; see this line:
robot.router.post "/hubot/gh-pull-requests", (req, res) ->
The two lines below from that script tell you what comes after the path; it takes the parameters room and type:
user.room = query.room if query.room
user.type = query.type if query.type
Hubot itself defines the port number and routes the path to the plugin as requested; check this part in robot.coffee. The default port is 8080.
Therefore the URL looks like the below:
http://<your hubot sever>:8080/hubot/gh-pull-requests/?room=<room>&type=<type>
You can actually use a curl command to test it against Hubot directly first.
Add the <HUBOT_URL>:<PORT>/hubot/gh-pull-requests?room=<room>[&type=<type>] URL as the hook via the API.
That is the URL.
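A hedged way to exercise it end to end, posting a sample payload straight to Hubot (host, payload file, and room name hypothetical; note the # in an IRC room name must be URL-encoded as %23):
curl -X POST -H "Content-Type: application/json" -d @sample_pull_request.json "http://localhost:8080/hubot/gh-pull-requests?room=%23mychannel"
If the plugin is wired up correctly, Hubot should announce the pull request in that room.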