The execution_date_gte parameter seems to have no effect in the Airflow REST API dagRuns call below.
curl -X GET 'http://localhost:8080/api/v1/dags/demand_forecast/dagRuns' -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{"execution_date_gte": "2023-02-02T00:00:00+00:00"}' --user "airflow:airflow"
Response:
{
  "dag_runs": [
    {
      "conf": {},
      "dag_id": "demand_forecast",
      "dag_run_id": "scheduled__2022-02-15T00:00:00+00:00",
      "end_date": "2022-08-22T08:46:37.026194+00:00",
      "execution_date": "2022-02-15T00:00:00+00:00",
      "external_trigger": false,
      "logical_date": "2022-02-15T00:00:00+00:00",
      "start_date": "2022-08-22T08:46:15.451700+00:00",
      "state": "success"
    }
  ],
  "total_entries": 1
}
Here the returned DAG run has an execution_date that is earlier than the provided execution_date_gte (2023-02-02T00:00:00+00:00).
execution_date_gte should be passed as a query parameter of the GET request, not in the request body.
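For example, a minimal sketch of the corrected call (same DAG and credentials as above; note that the timestamp must be URL-encoded, in particular the + sign):
curl -X GET \
  'http://localhost:8080/api/v1/dags/demand_forecast/dagRuns?execution_date_gte=2023-02-02T00%3A00%3A00%2B00%3A00' \
  -H 'Cache-Control: no-cache' \
  --user "airflow:airflow"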
Related
I am trying to write a script to move repos from one project to another, but I am getting a 400 error whenever I try.
My Python requests call looks like:
url = 'https://bitbucketserver.com/rest/api/1.0/projects/example1/repos/repo1'
token = 'TokenString'
response = requests.put(url, headers={'Content-Type': 'application/json', 'Authorization': 'Bearer' + token}, data={'project': {'key': 'NEW_PROJECT'}}, verify=False)
I get a 400 response that says:
Unexpected character ('p' (code112)): expected a valid value (number, string, array, object, true, false, or null) at [Source: com.atlassian.stash.internal.web.util.web.CountingServletInputStream#7ccd7631; line 1, column 2]
I'm not sure where my syntax is wrong.
Not Python, but this works for me via curl:
curl -u 'USER:PASSWORD' --request PUT \
--url 'https://stash.vsegda.da/rest/api/1.0/projects/OLD_PROJECT/repos/REPO_TO_MOVE' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
"project": {"key":"NEW_PROJECT"}
}'
Maybe it helps someone.
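For reference, a rough Python equivalent of that curl (untested sketch). In the Python snippet from the question, the two likely problems are the missing space in 'Bearer' + token and the use of data= with a dict, which sends form-encoded data instead of JSON (which is why the parser chokes on the 'p' of project); passing json= sends a real JSON body and sets the Content-Type header:
import requests

url = 'https://bitbucketserver.com/rest/api/1.0/projects/example1/repos/repo1'
token = 'TokenString'

response = requests.put(
    url,
    headers={'Authorization': 'Bearer ' + token},  # note the space after "Bearer"
    json={'project': {'key': 'NEW_PROJECT'}},      # json= sends JSON, not form data
    verify=False,
)
print(response.status_code, response.text)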
I'm trying to scrape an Amazon page with browserless:
curl -X POST \
"https://chrome.browserless.io/content?token=<token>" \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{
  "url": "https://www.amazon.com/gp/your-account/order-details/?orderID=114-5444651-3149007",
  "elements": [{
    "selector": "a"
  }],
  "cookies": [
    <many cookies>
  ]
}'
but I keep getting:
[{"message":"\"elements\" is not allowed","path":["elements"],"type":"object.unknown","context":{"child":"elements","label":"elements","value":[{"selector":"a","timeout":10000}],"key":"elements"}}]%
If I exclude the elements object, it works fine but returns the entire 6,000 lines of <html>.
(What I actually want is document.getElementsByClassName('shipment')[0].innerText)
When I try the examples (from the docs) they work fine.
It's because elements is only available in the /scrape API.
You are using /content.
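For example, roughly the same request against /scrape (a sketch, assuming /scrape accepts the cookies option the same way /content does; the .shipment selector targets the element whose innerText you actually want):
curl -X POST \
  "https://chrome.browserless.io/scrape?token=<token>" \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{
    "url": "https://www.amazon.com/gp/your-account/order-details/?orderID=114-5444651-3149007",
    "elements": [{
      "selector": ".shipment"
    }],
    "cookies": [
      <many cookies>
    ]
  }'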
An API exists for obtaining all time-entries for a workspace, /workspaces/{workspaceId}/time-entries, and another for obtaining time-entries for a user, /workspaces/{workspaceId}/user/{userId}/time-entries.
Can filtering be added to /workspaces/{workspaceId}/time-entries? Start Date would be nice.
Would it be possible to add a way to obtain time-entries for a project?
/workspaces/{workspaceId}/projects/{projectId}/time-entries, with filtering of course
For the new Clockify report API, documented at https://clockify.me/developers-api#tag-Reports, you can use the following requests.
Filtering time entries by user and date:
curl --request POST \
--url https://reports.api.clockify.me/v1/workspaces/<YOUR WORKSPACE ID>/reports/summary \
--header 'content-type: application/json' \
--header 'x-api-key: <YOUR API KEY>' \
--data '{
    "dateRangeStart": "2020-08-13T00:00:00.000Z",
    "dateRangeEnd": "2020-08-13T23:59:59.000Z",
    "summaryFilter": {"groups": ["USER", "TIMEENTRY"]},
    "exportType": "JSON",
    "users": {
        "ids": ["<USER ID>"],
        "contains": "CONTAINS",
        "status": "ALL"
    }
}'
All time entries for a certain project:
curl --request POST \
--url https://reports.api.clockify.me/v1/workspaces/<YOUR WORKSPACE ID>/reports/summary \
--header 'content-type: application/json' \
--header 'x-api-key: <YOUR API KEY>' \
--data '{
"dateRangeStart": "2020-08-13T00:00:00.000Z",
"dateRangeEnd": "2020-08-13T23:59:59.000Z",
"summaryFilter": {"groups": ["PROJECT", "TIMEENTRY"]},
"exportType": "JSON",
"projects": {"ids" : ["<PROJECT ID>"]}
}'
If you remove the TIMEENTRY group, you will just get the sums as a result, not the individual time entries.
Of course, you can also fetch all time entries for a project grouped by user (filtering by project and grouping by USER), like this:
"summaryFilter": {"groups": ["PROJECT", "USER", "TIMEENTRY"]},
According to https://developer.openstack.org/api-ref/container-infrastructure-management/#create-new-cluster, all I need to do to create the cluster is pass the parameters like this:
curl --header "X-Auth-Token: blah" \
-X POST https://myopenstack:9511/v1/clusters -d name="Swarm-cluster-ansible" -d cluster_template_id="7402f9d3-4881-440f-8496-08d420935f58" -d node_count=2 -d keypair="k8s-gitlab-ci"
It is giving me:
{"errors": [{"status": 400, "code": "client", "links": [], "title": "Unknown argument: \"cluster_template_id, node_count, keypair, name\"", "detail": "Unknown argument: \"cluster_template_id, node_count, keypair, name\"", "request_id": ""}]}
If I try it this way:
curl --header "X-Auth-Token: blah" \
-X POST https://myopenstack:9511/v1/clusters -d cluster='{
"name":"swarm",
"master_count":1,
"discovery_url":null,
"cluster_template_id":"7402f9d3-4881-440f-8496-08d420935f58",
"node_count":1,
"keypair":"k8s-gitlab-ci",
"master_flavor_id":null,
"labels":{
},
"flavor_id":null
}'
{"errors": [{"status": 400, "code": "client", "links": [], "title": "Invalid input for field/attribute cluster", "detail": "Invalid input for field/attribute cluster. Value: '{\n \"name\":\"swarm\",\n \"master_count\":1,\n \"discovery_url\":null,\n \"cluster_template_id\":\"7402f9d3-4881-440f-8496-08d420935f58\",\n \"node_count\":1,\n \"keypair\":\"k8s-gitlab-ci\",\n \"master_flavor_id\":null,\n \"labels\":{\n },\n \"flavor_id\":null\n}'. unable to convert to Cluster. Error: __init__() takes exactly 1 argument (2 given)", "request_id": ""}]}
Any idea?
EDIT: I am able to do a GET and retrieve the list of existing clusters.
-H "Content-Type: application/json" is enough in this case so the body is interpreted as JSON.
I am fetching process instances by passing variables, using the historic process instance query API.
The reference link is as follows:
http://localhost:8082/activiti-app/api-explorer.html#!/process-instances/getHistoricProcessInstancesUsingPOST
The curl command for this API is as follows:
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Authorization: Basic YWRtaW5AYXBwLmFjdGl2aXRpLmNvbTphZG1pbg==' -d '{
"variables": [
{
"name": "location",
"operation": "equals",
"value": "banglore"
}
]
}' 'http://localhost:8082/activiti-app/api/enterprise/historic-process-instances/query'
The above example worked for me, but I want to pass multiple values at once for value. For example, I need to pass India, Banglore, Puna, Delhi, etc. as values at the same time. Is it possible to pass multiple values for value at once? Can anyone provide a solution for this?
Thanks & Regards
Shilpa Kulkarni