Detect errors on asynchronous writes to InfluxDB using the Python influxdb_client

This is my code:
write_api = client.write_api(write_options=ASYNCHRONOUS)
write_api.write(bucket, org, data, write_precision=WritePrecision.US)
1 - How can I detect write errors?
2 - Should I initialize write_api each time I want to write, or can I initialize it once and reuse the same object all the time?

callback = write_api.write(bucket, org, data, write_precision=WritePrecision.US)
callback.wait()
callback.get()
This is the only way I found. Unfortunately, the wait() basically makes the write synchronous, which defeats the performance benefit.
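If blocking after every single write is too costly, one workaround (an untested sketch, assuming write() in ASYNCHRONOUS mode keeps returning the AsyncResult-style object shown above, and with batches as a hypothetical iterable of records) is to collect the returned objects and check them all in one place later, so only the error check is deferred rather than every write:

from influxdb_client import WritePrecision
from influxdb_client.client.write_api import ASYNCHRONOUS

write_api = client.write_api(write_options=ASYNCHRONOUS)

pending = []
for data in batches:  # hypothetical source of records to write
    pending.append(write_api.write(bucket, org, data, write_precision=WritePrecision.US))

# Drain the results later, in one place; get() re-raises any write error.
errors = []
for result in pending:
    try:
        result.get()
    except Exception as exc:  # e.g. an ApiException raised by the client
        errors.append(exc)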

Related

How to turn off reasoning in the Grakn python client

I am using the Grakn python client and I want to query for data without reasoning turned on.
client = GraknClient(uri=uri)
session = client.session(keyspace=keyspace)
tx = session.transaction().read()
Do I pass an argument in the transaction() method?
You can turn reasoning off for each specific query by passing the infer=False parameter, like this:
transaction.execute(query, infer=False, explain=False, batch_size=50)
Check out the documentation http://dev.grakn.ai/docs/client-api/python#lazily-execute-a-graql-query
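For context, a minimal sketch that combines the question's setup with the answer above (untested; the execute() arguments are taken verbatim from the answer, and query is assumed to be a Graql query string):

client = GraknClient(uri=uri)
session = client.session(keyspace=keyspace)
tx = session.transaction().read()
# Reasoning is disabled for this query only.
answers = tx.execute(query, infer=False, explain=False, batch_size=50)
tx.close()
session.close()
client.close()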

Count attempts in airflow sensor

I have a sensor that waits for a file to appear in an external file system
The sensor uses mode="reschedule"
I would like to trigger a specific behavior after X failed attempts.
Is there any straightforward way to know how many times the sensor has already attempted to run the poke method?
My quick fix so far has been to push an XCom with the attempt number, and increase it every time the poke method returns False. Is there any built-in mechanism for this?
Thank you
I had a similar problem with sensor mode="reschedule": I wanted to poke a different file path based on the current time, without directly referencing pendulum.now or datetime.now.
I used task_reschedules (as done in the base sensor operator to get try_number for reschedule mode: https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/sensors/base.html#BaseSensorOperator.execute):
from airflow.models import TaskReschedule

def execute(self, context):
    task_reschedules = TaskReschedule.find_for_task_instance(context['ti'])
    self.poke_number = len(task_reschedules) + 1
    super().execute(context)
then self.poke_number can be used within poke(), and the current time is approximately execution_date + (poke_number * poke_interval).
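For illustration, a rough custom-sensor sketch built around the same idea (untested; the class name, the attempt threshold, and the _file_exists() helper are hypothetical):

from airflow.models import TaskReschedule
from airflow.sensors.base import BaseSensorOperator


class CountingFileSensor(BaseSensorOperator):
    """Reschedule-mode sensor that knows how many times it has poked."""

    def execute(self, context):
        task_reschedules = TaskReschedule.find_for_task_instance(context['ti'])
        self.poke_number = len(task_reschedules) + 1
        super().execute(context)

    def poke(self, context):
        if self.poke_number >= 10:  # hypothetical "X failed attempts" threshold
            self.log.warning("Still no file after %s attempts", self.poke_number)
        return self._file_exists()

    def _file_exists(self):
        # Hypothetical check against the external file system.
        return False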
Apparently, the XCom approach doesn't work, because XComs pushed from poke() don't seem to be available between pokes; they always come back empty.
try_number inside task_instance doesn't help either, as pokes don't count as a new try number.
I ended up computing the attempt number by hand:
attempt_no = math.ceil((pendulum.now(tz='utc') - kwargs['ti'].start_date).seconds / kwargs['task'].poke_interval)
The code will work fine as long as individual executions of the poke method don't last longer than the poke interval (which they shouldn't).
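Spelled out with the imports it needs (sketch only; ti and task are the values pulled from the sensor's context, exactly as in the line above):

import math
import pendulum


def attempt_number(ti, task):
    # Time elapsed since the sensor task started, divided by its poke interval.
    elapsed = (pendulum.now(tz='utc') - ti.start_date).seconds
    return math.ceil(elapsed / task.poke_interval)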
Best

Watson Conversation always returning first Variation text

We are using Watson Conversation from Python. Our dialog has responses with variation texts, but we always receive the first variation, which is the problem. The dialog works well when you run it from the Bluemix Conversation tooling.
def wd_conv_send_message(sTexto):
    # Replace with the context obtained from the initial request
    context = {}
    workspace_id = conv_workspaceid
    response = conversation.message(
        workspace_id=workspace_id,
        message_input={'text': sTexto},
        context=context
    )
    # print(json.dumps(response, indent=2))
    print(response['output']['text'][0])
Change:
response = conversation.message(
    workspace_id=workspace_id,
    message_input={'text': sTexto},
    context=context
)
to:
response = conversation.message(
    workspace_id=workspace_id,
    message_input={'text': sTexto},
    context=context
)
context = response['context']
Conversation is stateless, so you need to send back the context you received, or it won't know where to continue from.
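To make that work across calls, the context has to survive between invocations of the function; a minimal sketch (untested, assuming the same conversation client and conv_workspaceid as in the question, with the context kept at module level):

# Keep the dialog context between calls so Watson knows where to continue.
wd_context = {}


def wd_conv_send_message(sTexto):
    global wd_context
    response = conversation.message(
        workspace_id=conv_workspaceid,
        message_input={'text': sTexto},
        context=wd_context
    )
    wd_context = response['context']
    print(response['output']['text'][0])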
It turned out to be somewhat erratic behaviour on the Watson Conversation side, combined with debugging: if you run/debug from PyCharm, with the variations set to either Sequential or Random, you get only the very first variation several times (five or more). But if you run from the Python interpreter command line, it seems to work fine. So I guess (purely speculative) it has to do with some timing issue when running from PyCharm.

Invalidate/prevent memoize with plone.memoize.ram

I have a Zope utility with a method that performs a network process.
As the result of the call is valid for a while, I'm using plone.memoize.ram to cache it.
class MyClass(object):

    @cache(cache_key)
    def do_auth(self, adapter, data):
        # perform the expensive network process here
...and the cache function:
def cache_key(method, utility, data):
    return time() // (60 * 60)
But I want to prevent the memoization from taking place when the do_auth call returns empty results (or raises network errors).
Looking at the plone.memoize code, it seems I need to raise the ram.DontCache() exception, but before doing that I need a way to inspect the old cached value.
How can I get the cached data from the cache storage?
I put this together from several pieces of code I wrote...
It's not tested, but it may help you.
You can access the cached data using the ICacheChooser utility.
Its call method needs the dotted name of the function you cached; in your case:
key = '{0}.{1}'.format(__name__, method.__name__)
cache = getUtility(ICacheChooser)(key)
storage = cache.ramcache._getStorage()._data
cached_infos = storage.get(key)
cached_infos should contain all the information you need.
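Putting the two pieces together, a heavily hedged sketch of a cache_key that inspects the previously cached value and raises ram.DontCache (untested; _previous_result_was_empty is a hypothetical check whose body must be adapted to the actual shape of the stored entries, and the key is assumed to match the dotted name plone.memoize.ram uses for do_auth):

from time import time

from plone.memoize import ram
from plone.memoize.interfaces import ICacheChooser
from zope.component import getUtility


def _previous_result_was_empty(cached_infos):
    # Hypothetical test: inspect cached_infos in a debugger and adapt this
    # to whatever structure your RAM cache actually stores.
    return cached_infos is not None and not any(cached_infos.values())


def cache_key(method, utility, data):
    # Dotted name under which plone.memoize.ram stores this method's results.
    key = '{0}.{1}'.format(method.__module__, method.__name__)
    cache = getUtility(ICacheChooser)(key)
    storage = cache.ramcache._getStorage()._data
    cached_infos = storage.get(key)
    if _previous_result_was_empty(cached_infos):
        # Skip the cache so do_auth() runs again instead of serving or
        # re-storing an empty result.
        raise ram.DontCache()
    return time() // (60 * 60)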

Sending live-data from a servlet

I'm developing a web application in which a constant stream of data is received every 5 seconds or so in a Java servlet (read from a file written by another application). I want to push this data to an HTML page and read it in JavaScript so I can graph it with the D3 library.
At the moment I'm using a JavaScript function that calls the servlet's doGet method every 5 seconds. I'm worried this creates a lot of overhead, or that it could be done more efficiently.
I know it's also possible to run "response.setIntHeader("Refresh", 5);" from the servlet.
Are there any other better ways?
Thanks in advance for the help!
Short polling is currently probably the most common approach to the problem you describe.
If you can cope with a few seconds' lag in notification, then short polling is really simple; here is a basic example:
On page load, call this in your JS:
setInterval(checkFor, 30000);
The above will call the function checkFor() every 30 seconds (obviously, you can change the 30 seconds to any length of time; just adjust the 30000 in the line above according to how regularly you want users to be updated).
Then, in your checkFor function, just make an Ajax call to your server asking whether there are any updates. If the server says yes, display the alert in JS; if not (which will most likely be most of the time), do nothing:
function checkFor(){
    $.ajax({
        url: "your/server/url",
        type: "POST",
        success: function( notification ) {
            // Check if any notifications are returned - if so then display alert
        },
        error: function(data){
            // handle any error
        }
    });
}
