Failed when parsing body as json - calling the Rasa API with requests.post() - python-requests

I'm writing a function to call the Rasa API for intent prediction. Here is my code:
import requests

def run_test():
    url = "http://localhost:5005/model/parse"
    obj = {"text": "What is your name?"}
    response = requests.post(url, data=obj)
    print(response.json())
I also started the Rasa server with this command: rasa run -m models --enable-api --cors "*" --debug
In the terminal where I executed run_test(), I got this result:
{'version': '2.7.1', 'status': 'failure', 'message': 'An unexpected error occurred. Error: Failed when parsing body as json', 'reason': 'ParsingError', 'details': {}, 'help': None, 'code': 500}
Can anybody help me solve this problem? Thank you very much!

I found the solution:
You only need to json.dumps() the object, because a Python dict is not the same thing as a JSON string.
import json
import requests

def run_test():
    url = "http://localhost:5005/model/parse"
    obj = {"text": "What is your name?"}
    response = requests.post(url, data=json.dumps(obj))
    print(response.json())
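A note on an alternative (not from the original answer, but standard python-requests behavior): the json= parameter lets requests serialize the dict itself, and it also sets the Content-Type: application/json header:

import requests

def run_test():
    url = "http://localhost:5005/model/parse"
    obj = {"text": "What is your name?"}
    # json= serializes obj to a JSON body and sets Content-Type: application/json
    response = requests.post(url, json=obj)
    print(response.json())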

Related

Got Future <Future pending> attached to a different loop when using eventhub aio in FastAPI (Python)

I am using Python 3.9, azure-eventhub 5.10.1, and azure-eventhub-checkpointstoreblob-aio. The following code throws the exception below regularly (we also have lots of successful cases that send the message with no error), and I also get the runtime errors in the logs. I'm wondering what I did wrong here. Thanks.
async def send_to_eventhub(self, producer, event_list, timestamp_event_received):
    try:
        async with producer:
            event_data_batch = await producer.create_batch()
            for (occupancy_status, hardware_id) in event_list:
                # set message properties for space report
                message_body = {
                    ...
                }
                message = EventData(json.dumps(message_body))
                message.properties = {
                    ...
                }
                # Send message to the eventhub
                logger.info("Sending message %s, %s", message, message.properties)
                event_data_batch.add(message)
            await producer.send_batch(event_data_batch)
            logger.info(
                "Message successfully sent %s, %s", message, message.properties
            )
    except (
        EventDataError,
        EventDataSendError,
        OperationTimeoutError,
        OwnershipLostError,
        RuntimeError,
    ) as event_ex:
        logger.error(
            "eventhub Sending Error: Error ocurred\
            sending message for hardware id %s %s %s",
            hardware_id,
            event_ex,
            traceback.format_exc(),
        )
And this function is called from the following FastAPI handler:
@app.post(...)
async def handle_report(
    ...
):
    ...
    try:
        if len(incoming_data) > 0:
            event_list = []
            for sensor_data in incoming_data:
                data = sensor_data["data"]
                occupancy_status = json.loads(data)["value"]
                hardware_id = sensor_data["properties"]["propertyList"][0]["value"]
                event_list.append((occupancy_status, hardware_id))
            await eventhub_helper.send_to_eventhub(
                producer, event_list, received_timestamp
            )
    ...
And the exception says:
eventhub Sending Error: Error ocurred sending message for hardware id TSPR04ESH11000268 Task <Task pending name='Task-544711411' coro=<RequestResponseCycle.run_asgi() running at /opt/pysetup/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py:375> cb=[set.discard()]> got Future <Future pending> attached to a different loop
Traceback (most recent call last):
  File "/app/eventhub_helper.py", line 94, in send_to_eventhub
    logger.info(
  File "/opt/pysetup/.venv/lib/python3.9/site-packages/azure/eventhub/aio/_producer_client_async.py", line 218, in __aexit__
    await self.close()
  File "/opt/pysetup/.venv/lib/python3.9/site-packages/azure/eventhub/aio/_producer_client_async.py", line 811, in close
    async with self._lock:
  File "/usr/local/lib/python3.9/asyncio/locks.py", line 14, in __aenter__
    await self.acquire()
  File "/usr/local/lib/python3.9/asyncio/locks.py", line 120, in acquire
    await fut
RuntimeError: Task <Task pending name='Task-544711411' coro=<RequestResponseCycle.run_asgi() running at /opt/pysetup/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py:375> cb=[set.discard()]> got Future <Future pending> attached to a different loop
I tried to reproduce this error, but that was hard because the call usually goes through with no error. I'm wondering if I did not consider concurrency carefully enough. I did notice that event_data_batch.add(message) can raise an error if the batch is full, but I don't think that could cause this runtime error, and I know the messages we send are small.
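One detail worth checking (my suggestion, not part of the original post): asyncio primitives remember the event loop they were created on, so this error typically appears when the EventHubProducerClient is constructed on one loop (for example at import time, or under a separate asyncio.run()) and then awaited inside uvicorn's serving loop. A minimal sketch of that remedy, creating the client inside the running handler; the connection string, hub name, and route are illustrative assumptions:

from azure.eventhub.aio import EventHubProducerClient
from fastapi import FastAPI

app = FastAPI()
EVENTHUB_CONN_STR = "..."  # hypothetical connection string, not from the post

@app.post("/report")  # hypothetical route
async def handle_report():
    # Constructing the client inside the handler binds its internal locks to
    # the same event loop that awaits them, avoiding the cross-loop Future.
    producer = EventHubProducerClient.from_connection_string(
        EVENTHUB_CONN_STR, eventhub_name="my-hub"  # hub name is an assumption
    )
    async with producer:
        batch = await producer.create_batch()
        # ... add EventData messages to the batch here ...
        await producer.send_batch(batch)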

Airflow - Druid Operator is not getting host

I have a question about the Druid Operator. I see that this test succeeds, but I get this error:
  File "/home/airflow/.local/lib/python3.7/site-packages/requests/sessions.py", line 792, in get_adapter
    raise InvalidSchema(f"No connection adapters were found for {url!r}")
My DAG looks like this:
DRUID_CONN_ID = "druid_ingest_conn_id"
ingestion = DruidOperator(
    task_id='ingestion',
    druid_ingest_conn_id=DRUID_CONN_ID,
    json_index_file='ingestion.json'
)
I also tried changing the DAG, but I get the same error.
As another step, I changed the operator type to a SimpleHttpOperator like this, but then I get a different error:
ingestion_2 = SimpleHttpOperator(
    task_id='test_task',
    method='POST',
    http_conn_id=DRUID_CONN_ID,
    endpoint='/druid/indexer/v1/task',
    data=json.dumps(read_file),
    dag=dag,
    do_xcom_push=True,
    headers={
        'Content-Type': 'application/json'
    },
    response_check=lambda response: response.json()['Status'] == 200,
)
{"error":"Missing type id when trying to resolve subtype of [simple type, class org.apache.druid.indexing.common.task.Task]: missing type id property 'type'\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 1]"}
Finally, I tried giving an HTTP connection to the Druid Operator, but then I get an error like this:
  raise AirflowException(f'Did not get 200 when submitting the Druid job to {url}')
So I am confused, and I need some help. Thanks for any answers.
P.S.: We use Airflow version 2.3.3.
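For context (my addition, not from the post): requests raises this InvalidSchema error whenever the final URL's scheme is anything other than http:// or https://. If the Airflow connection for Druid is configured without an HTTP schema, the hook can end up building a URL like druid://host:port/..., which is presumably what happens here. A tiny reproduction, independent of Airflow:

import requests

# requests only mounts connection adapters for "http://" and "https://",
# so any other scheme reproduces the error from the Airflow log.
try:
    requests.post("druid://localhost:8081/druid/indexer/v1/task", json={})
except requests.exceptions.InvalidSchema as exc:
    print(exc)  # No connection adapters were found for 'druid://...'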

Airflow Error callback "on_failure_callback" is not executing all the lines in the function

I have a problem with the usage of on_failure_callback. I defined my failure-callback function to perform two HTTP POST requests, with a logging.error() message between the two. I notice that only one of them is executed. Is there a delay or something I am missing here? Please help.
def custom_failure_function(context):
    logging.error("These task instances ahhh")
    to_json = json.loads(t_teams)
    var1 = json.dumps(to_json)
    print(var1)
    r = requests.post('https://myteamschannel/teams', data=var1, verify=False)
    logging.error("hello")
    runID = 'OPERATION_CONTEXT .OCV8.TEST2 alarm_object 193'
    headers = {'Content-Type': 'text/xml'}
    alarmRequest = '<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:oper=\"http://172.19.146.147:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object\"><soapenv:Header xmlns:wsa=\"http://www.w3.org/2005/08/addressing\"><wsu:Timestamp xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\"><wsu:Created>2014-05-22T11:57:38.267Z</wsu:Created><wsu:Expires>2014-05-22T12:02:38.000Z</wsu:Expires></wsu:Timestamp><wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soapenv:mustUnderstand=\"1\"><wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\"><wsse:Username>girws</wsse:Username><wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">Temip</wsse:Password></wsse:UsernameToken></wsse:Security></soapenv:Header> <soapenv:Body> <oper:Set_Request xmlns:oper=\"http://172.19.146.147:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object\"><EntitySpec><Natural> '+ runID + '</Natural></EntitySpec><Arguments> <Attribute_Values><Filtering_Type>' + 'AUTOFAIL' + '</Filtering_Type></Attribute_Values></Arguments></oper:Set_Request> </soapenv:Body> </soapenv:Envelope>'
    r = requests.post('http://myerrorappli:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object', header=headers, data=alarmRequest, verify=True)
    logging.error("FAILED TASK")
    logging.error("============================================")
My Airflow logs are below. They stop at the "hello" message and never print "FAILED TASK".
*** Reading local file: /data/airflow//logs/MOE_TEST_DAG/TeamsTest/2021-10-02T08:24:14.821970+00:00/3.log
[2021-10-02 10:24:36,535] {MOE_TEST.py:132} ERROR - These task instances ahhh
[2021-10-02 10:24:36,987] {MOE_TEST.py:138} ERROR - hello
From your description it's more likely that there is an issue with requests.post(); try adding a timeout to the request:
def custom_failure_function(context):
    ...
    try:
        # note: the keyword must be headers=, not header=
        r = requests.post('http://myerrorappli:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object',
                          headers=headers, data=alarmRequest, verify=True, timeout=5)
    except requests.Timeout:
        logging.error("request timeout")
    except requests.ConnectionError:
        logging.error("request connection error")
    logging.error("FAILED TASK")
    logging.error("============================================")

Backslashes are not correctly encoded

I am at wits' end.
My scenario: I am using python-requests to interact with the Icinga2 API, trying to schedule a downtime. I know how this is supposed to work, and it works most of the time. But I am completely out of luck whenever the Icinga2 service I try to set a downtime for has a backslash in its name.
My test environment:
Icinga2 2.9.0
Python 3.6.8 / Python 3.8.11
requests 2.27.0
Prerequisite: create a host in Icinga, and create a service in Icinga whose name contains a "\" character.
Python Code for Reproduction:
import requests

session = requests.Session()
session.hooks = {
    "response": lambda r, *args, **kwargs: r.raise_for_status()
}
session.headers.update(
    {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "cache-control": "no-cache",
    }
)
session.auth = requests.auth.HTTPBasicAuth("user", "password")
url = "https://icinga2-server.com:5665/v1/actions/schedule-downtime"
body = {
    'comment': 'Downtime',
    'author': ('icingaadmin',),
    'start_time': 1605196800.0,
    'filter': 'host.name==\"HOSTNAME\" && service.name==\"SERVICE\\NAME\"',
    'end_time': 1605286800.0,
    'fixed': True,
    'type': 'Service'}
session.post(url, json=body, verify=False)
Result:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/.pyenv/versions/3.8.11/lib/python3.8/site-packages/requests/sessions.py", line 590, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/root/.pyenv/versions/3.8.11/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/root/.pyenv/versions/3.8.11/lib/python3.8/site-packages/requests/sessions.py", line 662, in send
    r = dispatch_hook('response', hooks, r, **kwargs)
  File "/root/.pyenv/versions/3.8.11/lib/python3.8/site-packages/requests/hooks.py", line 31, in dispatch_hook
    _hook_data = hook(hook_data, **kwargs)
  File "<stdin>", line 2, in <lambda>
  File "/root/.pyenv/versions/3.8.11/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://icinga2-server.com:5665/v1/actions/schedule-downtime
I know very well that this error message indicates that Icinga2 could not find/match a service. But executing the command via curl clearly works for me, and I get a properly scheduled downtime!
CURL Request:
curl -k -s -u user:password -H 'Accept: application/json' -X POST 'https://icinga2-server.com:5665/v1/actions/schedule-downtime' -d '{"comment": "Downtime", "author": ["icingaadmin"], "start_time": 1605196800.0, "filter": "host.name==\"MSSQLSERVER\" && service.name==\"MSSQLSERVER\\\\INSTANCE2\"", "end_time": 1605286800.0, "fixed": true, "type": "Service"}'
CURL Answer (SUCCESS):
{"results":[{"code":200.0,"legacy_id":8.0,"name":"MSSQLSERVER!MSSQLSERVER\\INSTANCE2!137c9ef9-3150-4e57-ba0b-a22ddc6611d4","status":"Successfully scheduled downtime 'MSSQLSERVER!MSSQLSERVER\\INSTANCE2!137c9ef9-3150-4e57-ba0b-a22ddc6611d4' for object 'MSSQLSERVER!MSSQLSERVER\\INSTANCE2'."}]}
Alternate approaches that did not help:
session.post(url, data=json.dumps(body), verify=False)
string_body = json.dumps(body)
session.post(url, data=string_body, verify=False)
You can try adding an r prefix to the string after the filter key. (Also, I think this closing parenthesis is unnecessary.)
Python treats backslashes in strings as escape characters; adding an 'r' before the string makes Python treat them as literal backslashes instead:
'filter': r'host.name=="HOSTNAME" && service.name=="SERVICE\NAME"'),
https://www.journaldev.com/23598/python-raw-string
body1 = {
    'comment': 'Downtime',
    'author': ('icingaadmin',),
    'start_time': 1605196800.0,
    'filter': 'host.name==\"HOSTNAME\" && service.name==\"SERVICE\\NAME\"',
    'end_time': 1605286800.0,
    'fixed': True,
    'type': 'Service'}
body2 = {
    'comment': 'Downtime',
    'author': ('icingaadmin',),
    'start_time': 1605196800.0,
    'filter': r'host.name==\"HOSTNAME\" && service.name==\"SERVICE\\NAME\"',
    'end_time': 1605286800.0,
    'fixed': True,
    'type': 'Service'}

body1 == body2   # -> False
body1['filter']  # 'host.name=="HOSTNAME" && service.name=="SERVICE\\NAME"'
body2['filter']  # 'host.name==\\"HOSTNAME\\" && service.name==\\"SERVICE\\\\NAME\\"'
Thank you very much for your support. I figured it out, and I was able to successfully set a downtime with the following body, which Leo Wotzak beat me to:
body1 = {
    'comment': 'Downtime',
    'author': ('icingaadmin',),
    'start_time': 1605196800.0,
    'filter': 'host.name==\"HOSTNAME\" && service.name==\"SERVICE\\\\NAME\"',
    'end_time': 1605286800.0,
    'fixed': True,
    'type': 'Service'}
I don't know why I didn't run a test with 4 backslashes earlier.
To add some background to the situation: the initial problem was that I was reading the list of available Icinga services, which gives me:
>>> resp.json()['results'][1]['name']
'MSSQLSERVER!SERVICE\\NAME'
If I put that answer into the body, it fails, because the JSON converter does not turn 2 backslashes into 4. The question is: what is the best way to tell json that 2 backslashes need to become 4? My idea is to work with encoding:
svc_name = resp.json()['results'][1]['name'].encode("unicode-escape").decode("utf-8")
I don't know if that's the best way, but it works.
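To make the escaping chain concrete, here is a small demonstration of the behavior described above (not code from the post): the API returns one literal backslash, unicode-escape doubles it for the Icinga filter language, and json.dumps() doubles it again on the wire:

import json

svc = "MSSQLSERVER!SERVICE\\NAME"  # one literal backslash, as returned by the API
escaped = svc.encode("unicode-escape").decode("utf-8")
print(escaped)           # MSSQLSERVER!SERVICE\\NAME -- now two literal backslashes
filt = 'service.name=="%s"' % escaped
print(json.dumps(filt))  # four backslashes in the JSON actually sent to Icinga2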

Custom command result

When invoking a custom command, I noticed that only the logs are displayed. For example, if my custom command script contains a return statement such as return "great custom command", I can't find that value in the result, whether through the Java API client or shell execution.
What can I do to retrieve that result at the end of an execution?
Thanks.
Command definition in service description file:
customCommands ([
    "getText" : "getText.groovy"
])
getText.groovy file content:
def text = "great custom command"
println "trying to get a text"
return text
Assuming that your service file contains the following:
customCommands ([
    "printA" : {
        println "111111"
        return "222222"
    },
    "printB" : "fileB.groovy"
])
And fileB.groovy contains the following code:
println "AAAAAA"
return "BBBBBB"
Then if you run the following command: invoke yourService printA
You will get this :
Invocation results:
1: OK from instance #1..., Result: 222222
invocation completed successfully.
and if you run the following command: invoke yourService printB
You will get this :
Invocation results:
1: OK from instance #1..., Result: AAAAAA
invocation completed successfully.
So if your custom command's implementation is a Groovy closure, then its result is the closure's return value.
And if your custom command's implementation is an external Groovy file, then its result is the output of its last statement.
HTH,
Tamir.
