I have a function whose parameter's default value is datetime.now(). The method looks like this:
def as_standard_format(p_date=datetime.now(), fmt=sdk_constants.DEFAULT_DATE_TIME_FORMAT):
    return p_date.strftime(fmt)
And I have a test method like this:
@freeze_time(datetime(year=2018, month=9, day=25, hour=15, minute=46, second=36))
def testNowWithDefaultFormat(self):
    # The call below prints the frozen time
    print(datetime.now())
    # But the assertion below fails
    assert utils.format_date(
        datetime.now()) == "20180925T154636Z", "Default call not giving current time in " \
        "standard format \"%Y%m%dT%H%M%SZ\""
Why is freeze_time not working with the default parameter value?
You're falling victim to a common Python gotcha: default parameter values are evaluated only once, when the function is defined. It is not specific to freeze_time.
For the explanation, consider the following code:
import time
from datetime import datetime

def what_time_is_it(p_date=datetime.now()):
    return p_date.strftime('"%Y-%m-%d %X"')

if __name__ == '__main__':
    print(what_time_is_it())
    time.sleep(10)
    print(what_time_is_it())
What you might have expected to happen
"2019-02-28 13:47:24"
"2019-02-28 13:47:34"  # A datetime 10 seconds after the first one
A new datetime.now() is called on every call to what_time_is_it()
What does happen
"2019-02-28 13:47:24"
"2019-02-28 13:47:24" # EXACTLY the same time in both lines!
The datetime.now() default is evaluated once, when the function is defined, and that same value is used on every subsequent call.
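You can see this directly: the default is stored on the function object when the def statement runs (timestamp illustrative):
>>> from datetime import datetime
>>> def what_time_is_it(p_date=datetime.now()):
...     return p_date
...
>>> what_time_is_it.__defaults__  # evaluated once, at definition time
(datetime.datetime(2019, 2, 28, 13, 47, 24, 123456),)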
What you should do instead
Create a new object each time the function is called, using a sentinel default argument to signal that no value was provided (None is often a good choice):
def as_standard_format(p_date=None, fmt=sdk_constants.DEFAULT_DATE_TIME_FORMAT):
    if p_date is None:
        return datetime.now().strftime(fmt)
    else:
        return p_date.strftime(fmt)
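With this change, freeze_time works as expected, because datetime.now() is evaluated at call time, inside the frozen window. A minimal sketch using freezegun, assuming as_standard_format is importable and sdk_constants.DEFAULT_DATE_TIME_FORMAT is "%Y%m%dT%H%M%SZ":

from datetime import datetime
from freezegun import freeze_time

@freeze_time(datetime(year=2018, month=9, day=25, hour=15, minute=46, second=36))
def test_now_with_default_format():
    # p_date defaults to None, so datetime.now() runs here, under freeze_time
    assert as_standard_format() == "20180925T154636Z"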
When I import an xlsx sheet I get a 'RecursionError: maximum recursion depth exceeded' error. I am using Odoo v13. My aim is that when 'log_status' changes to the 'Confirmed' state, an assigned method should be called; for this, I am overriding the write method. My Python code is below:
@api.model
def write(self, vals):
    record = super(Transaction_log, self).write(vals)
    if 'log_status' in vals and vals.get('log_status') == 'Confirmed':
        self.action_confirm()
    return record

def action_confirm(self):
    self.write({'log_status': 'Confirmed'})
    self.action_performed.create({'log_status': 'Confirmed', 'trans_log': self.id,
                                  'performed_by': self.env.user.id, 'performed_time': datetime.now()})
    return True
Thanks in advance.
There are a few items you can improve.
Change api.model to api.multi.
Inside the action_confirm() method you call write again, and 'log_status' matches the 'Confirmed' condition, so the calls recurse indefinitely.
To avoid this, we can use the context to pass a dummy flag.
Try the following code:
@api.multi
def write(self, vals):
    record = super(Transaction_log, self).write(vals)
    if 'log_status' in vals and vals.get('log_status') == 'Confirmed' and not self._context.get('by_pass_log_status'):
        self.action_confirm()
    return record

@api.multi
def action_confirm(self):
    self.with_context(by_pass_log_status=True).write({'log_status': 'Confirmed'})
    self.action_performed.create({'log_status': 'Confirmed', 'trans_log': self.id,
                                  'performed_by': self.env.user.id, 'performed_time': datetime.now()})
    return True
This is my operator:
bigquery_check_op = BigQueryOperator(
    task_id='bigquery_check',
    bql=SQL_QUERY,
    use_legacy_sql=False,
    bigquery_conn_id=CONNECTION_ID,
    trigger_rule='all_success',
    xcom_push=True,
    dag=dag
)
When I check the Rendered page in the UI, nothing appears there.
When I run the SQL in the console it returns the value 1400, which is correct.
Why doesn't the operator push the XCom?
I can't use BigQueryValueCheckOperator. That operator is designed to FAIL against a check of a value. I don't want anything to fail. I simply want to branch the code based on the return value of the query.
Here is how you might be able to accomplish this with the BigQueryHook and the BranchPythonOperator:
from airflow.operators.python_operator import BranchPythonOperator
from airflow.contrib.hooks.bigquery_hook import BigQueryHook

def big_query_check(**context):
    sql = context['templates_dict']['sql']
    bq = BigQueryHook(bigquery_conn_id='default_gcp_connection_id',
                      use_legacy_sql=False)
    conn = bq.get_conn()
    cursor = conn.cursor()
    cursor.execute(sql)
    result = cursor.fetchone()
    # Do something with the result, return the task_id to branch to
    if result[0] == 0:
        return "task_a"
    else:
        return "task_b"

sql = "SELECT COUNT(*) FROM sales"

branching = BranchPythonOperator(
    task_id='branching',
    python_callable=big_query_check,
    provide_context=True,
    templates_dict={"sql": sql},
    dag=dag,
)
First we create a Python callable that we can use to execute the query and select which task_id to branch to. Second, we create the BranchPythonOperator.
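Downstream, you would then declare the two branch targets the callable returns; task_a and task_b here are illustrative placeholders:

from airflow.operators.dummy_operator import DummyOperator

task_a = DummyOperator(task_id='task_a', dag=dag)
task_b = DummyOperator(task_id='task_b', dag=dag)

# The BranchPythonOperator follows whichever task_id the callable
# returned; the other branch is skipped
branching >> task_a
branching >> task_b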
The simplest answer is that xcom_push is not one of the params of BigQueryOperator, nor of BaseOperator, nor of LoggingMixin.
The BigQueryGetDataOperator does return (and thus push) some data, but it works by table and column name. You could chain this behavior by making the query you run write its output to a uniquely named table (maybe use {{ ds_nodash }} in the name), then using that table as the source for this operator, and then branching with the BranchPythonOperator.
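A rough, untested sketch of that chaining idea (the dataset and table names are placeholders):

from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.contrib.operators.bigquery_get_data import BigQueryGetDataOperator

write_check_result = BigQueryOperator(
    task_id='write_check_result',
    bql=SQL_QUERY,
    use_legacy_sql=False,
    destination_dataset_table='my_dataset.check_result_{{ ds_nodash }}',
    bigquery_conn_id=CONNECTION_ID,
    dag=dag,
)

get_check_result = BigQueryGetDataOperator(
    task_id='get_check_result',
    dataset_id='my_dataset',
    table_id='check_result_{{ ds_nodash }}',
    bigquery_conn_id=CONNECTION_ID,
    dag=dag,
)

write_check_result >> get_check_result  # the pushed rows can then feed a branch callable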
You might instead try to use the BigQueryHook's get_conn().cursor() to run the query and work with some data inside the BranchPythonOperator.
Elsewhere we chatted and came up with something along the lines of this for putting in the callable of a BranchPythonOperator:
cursor = BigQueryHook(bigquery_conn_id='connection_name').get_conn().cursor()
# one of these two:
cursor.execute(SQL_QUERY)  # if non-legacy
cursor.job_id = cursor.run_query(bql=SQL_QUERY, use_legacy_sql=False)  # if legacy
result = cursor.fetchone()
return "task_one" if result[0] == 1400 else "task_two"  # depends on results format
I'm trying to cache the return value of a function, but only when it's not None.
In the following example, it makes sense to cache the result of someFunction for an hour if it managed to obtain data from some-url.
If the data could not be obtained, it does not make sense to cache the result for an hour (or more), but probably for 5 minutes (so the server for some-domain.com has some time to recover).
def _cachekey(method, self, lang):
    return (lang, time.time() // (60 * 60))

@ram.cache(_cachekey)
def someFunction(self, lang='en'):
    try:
        data = urllib2.urlopen('http://some-url.com/data.txt', timeout=10).read()
    except socket.timeout:
        data = None
    except urllib2.URLError:
        data = None
    return expensive_compute(data)
Calling method(self, lang) in _cachekey would not make a lot of sense.
As this code would be too long for a comment, I'll post it here in the hope it helps others:
# initialize cache
from zope.app.cache import ram
my_cache = ram.RAMCache()
my_cache.update(maxAge=3600, maxEntries=20)

_marker = object()

def _cachekey(lang, minutes):
    # key dict: the language plus the current time bucket
    return {'lang': lang, 'bucket': time.time() // (60 * minutes)}

def someFunction(self, lang='en'):
    # look for a fresh success (1-hour bucket), then a recent failure (5-minute bucket)
    cached_result = my_cache.query('someFunction', _cachekey(lang, 60), default=_marker)
    if cached_result is _marker:
        cached_result = my_cache.query('someFunction', _cachekey(lang, 5), default=_marker)
    if cached_result is not _marker:
        return cached_result
    # not found: download, compute and add to cache
    try:
        data = urllib2.urlopen('http://some-url.com/data.txt', timeout=10).read()
    except socket.timeout:
        data = None
    except urllib2.URLError:
        data = None
    if data is not None:
        # cache the computed value for 1 hr
        computed = expensive_compute(data)
        my_cache.set(computed, 'someFunction', _cachekey(lang, 60))
    else:
        # allow the download server to recover for 5 minutes instead of
        # trying to download on every page load
        computed = None
        my_cache.set(None, 'someFunction', _cachekey(lang, 5))
    return computed
In this case you cannot generalize "None means don't cache": with the decorator, the cache key can depend only on the input values, not on the result.
Instead, you should build the caching mechanism inside your function and not rely on a decorator.
This then becomes a generic, non-Plone-specific Python problem of how to cache values.
Here is an example of how to build manual caching using RAMCache:
https://developer.plone.org/performance/ramcache.html#using-custom-ram-cache
In NumPy, I am trying to get the time of day for each element in an array of datetime64.
I could settle for a new array of timedelta64 which contains the time passed since the beginning of the day for each element.
I already tried using numpy.datetime_as_string, but I don't know how to manipulate the resulting strings.
def datetime64_to_time_of_day(datetime64_array):
    """
    Return a new array. For every element in datetime64_array return the time of day (since midnight).
    >>> datetime64_to_time_of_day(np.array(['2012-01-02T01:01:01.001Z'],dtype='datetime64[ms]'))
    array([3661001], dtype='timedelta64[ms]')
    >>> datetime64_to_time_of_day(np.datetime64('2012-01-02T01:01:01.001Z','[ms]'))
    numpy.timedelta64(3661001,'ms')
    """
    # Casting to datetime64[D] truncates each element to midnight; casting back
    # to the original resolution lets the subtraction yield time since midnight.
    day = datetime64_array.astype('datetime64[D]').astype(datetime64_array.dtype)
    time_of_day = datetime64_array - day
    return time_of_day
If using pandas, you can call pandas.Series.dt.time to get datetime.time objects.
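For example (a small sketch with illustrative timestamps):

import pandas as pd

s = pd.Series(pd.to_datetime(['2012-01-02 01:01:01.001', '2012-01-02 13:30:00']))
print(s.dt.time)
# 0    01:01:01.001000
# 1           13:30:00
# dtype: object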
I am trying to select data from a table in SQLite, one row ONLY at a time for each call to the function, and I want the row to increment on each call (self.count is initialized elsewhere and 'line' is irrelevant here). I am using an adbapi connection pool in Twisted to connect to the DB. Here is the code I have tried:
def queryBTData4(self, line):
    self.count = self.count + 1
    uuId = self.count
    query = "SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid", {"uid": uuId}
    d = self.dbpool.runQuery(query)
    return d
This code works if I just set uid=1 or any other number in the DB (I used autoincrement for uid when I created the DB). But when I try to assign a value to uid (i.e. self.count via uuId), it reports that the operator has to be string or unicode. (I have tried both, but it does not seem to help.) However, I know that the query statement above works just fine in a previous program where I used a cursor and the execute command, so I cannot see why it does not work here. I have tried all sorts of combinations and searched for a solution but have not found anything that works. (I have also tried the statement with brackets and other forms.)
Thanks for any help or advice.
Here is the entire code:
from twisted.internet import protocol, reactor
from twisted.protocols import basic
from twisted.enterprise import adbapi
import sqlite3, time

class ServerProtocol(basic.LineReceiver):
    def __init__(self):
        self.conn = sqlite3.connect('biomed2.db', check_same_thread=False)
        self.dbpool = adbapi.ConnectionPool("sqlite3", 'biomed2.db', check_same_thread=False)

    def connectionMade(self):
        self.sendLine("conn made")
        factory = protocol.ClientFactory()
        factory.protocol = ClientProtocol
        factory.originator = self
        reactor.connectTCP('localhost', 1234, factory)

    def lineReceived(self, line):
        self._received = line
        self.insertBTData4(self._received)
        self.sendLine("line recvd")

    def forwardLine(self, recipient):
        recipient.sendLine(self._received)

    def insertBTData4(self, data):
        print "data in insert is", data
        chx = data
        PID = 2
        device_ID = 5
        query = "INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES ('%s','%s','%s')" % (chx, PID, device_ID)
        dF = self.dbpool.runQuery(query)
        return dF

class ClientProtocol(basic.LineReceiver):
    def __init__(self):
        self.conn = sqlite3.connect('biomed2.db', check_same_thread=False)
        self.dbpool = adbapi.ConnectionPool("sqlite3", 'biomed2.db', check_same_thread=False)
        self.count = 0

    def connectionMade(self):
        print "server-client made connection with client"
        self.factory.originator.forwardLine(self)
        #self.transport.loseConnection()

    def lineReceived(self, line):
        d = self.queryBTData4(self)
        d.addCallbacks(self.sendData, self.printError)

    def queryBTData4(self, line):
        self.count = self.count + 1
        uuId = self.count
        query = ("SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid", {"uid": uuId})
        dF = self.dbpool.runQuery(query)
        return dF

    def sendData(self, line):
        data = str(line)
        self.sendLine(data)

    def printError(self, error):
        print "Got Error: %r" % error
        error.printTraceback()

def main():
    factory = protocol.ServerFactory()
    factory.protocol = ServerProtocol
    reactor.listenTCP(4321, factory)
    reactor.run()

if __name__ == '__main__':
    main()
The DB is created in another program, thus:
import sqlite3, time, string

conn = sqlite3.connect('biomed2.db')
c = conn.cursor()
c.execute('''CREATE TABLE btdata4
             (uid INTEGER PRIMARY KEY, co2_data integer, patient_Id integer, sensor_Id integer)''')
The main program takes data into the server socket and inserts into DB. On the client socket side, data is removed from the DB one line at a time and sent to an external server. The program also has the ability to send data from the server side to the client side if required but I am not doing so here at the moment.
In queryBTData4(), every time the function is called the count increments, and I assign that value to uuId, which I then pass to the query. I had this query statement working in a program that did not use adbapi, but it does not seem to work here. I hope this is clear enough, but if not please let me know and I will try again.
EDIT:
I have modified the program to take one row from the DB at a time (see queryBTData4() below), but I have come across another problem.
def queryBTData4(self, line):
    self.count = self.count + 1
    xuId = self.count
    #xuId = 10
    return self.dbpool.runQuery("SELECT co2_data FROM btdata4 WHERE uid = ?", xuId)
    #return self.dbpool.runQuery("SELECT co2_data FROM btdata4 WHERE uid = 10")
When the count gets to 10, I get an error (posted below) which states: "Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied."
I have tried setting xuId to 10 (see the commented-out line xuId = 10), but I still get the same error. However, if I switch the return statements (to the commented-out return) I do indeed get the correct row, with no error. I have tried converting xuId to unicode, but that makes no difference; I still get the same error. Basically, if I set uid in the return statement to 10 or more (commented-out return) it works, but if I set uid to xuId (i.e. uid = ?, xuId) in the first return, it only works when xuId is below 10. The API documentation, as far as I can tell, gives no clue as to why this occurs. (I have also disabled the insert into the DB to rule it out, and checked the SQLite3 limit, which is 999.)
Here are the errors I am getting when using the first return statement.
Got Error: <twisted.python.failure.Failure <class 'sqlite3.ProgrammingError'>>
Traceback (most recent call last):
  File "c:\python26\lib\threading.py", line 504, in __bootstrap
    self.__bootstrap_inner()
  File "c:\python26\lib\threading.py", line 532, in __bootstrap_inner
    self.run()
  File "c:\python26\lib\threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
--- <exception caught here> ---
  File "c:\python26\lib\site-packages\twisted\python\threadpool.py", line 207, in _worker
    result = context.call(ctx, function, *args, **kwargs)
  File "c:\python26\lib\site-packages\twisted\python\context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "c:\python26\lib\site-packages\twisted\python\context.py", line 81, in callWithContext
    return func(*args, **kw)
  File "c:\python26\lib\site-packages\twisted\enterprise\adbapi.py", line 448, in _runInteraction
    result = interaction(trans, *args, **kw)
  File "c:\python26\lib\site-packages\twisted\enterprise\adbapi.py", line 462, in _runQuery
    trans.execute(*args, **kw)
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
Thanks.
Consider the API documentation for runQuery. Next, consider the difference between these three function calls:
c = a, b
f(a, b)
f((a, b))
f(c)
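Spelled out: runQuery takes the SQL and its bindings as separate arguments, the f(a, b) form, while the code above packs them into a single tuple, the f(c) form. A minimal, untested sketch of the corrected method from the question:

def queryBTData4(self, line):
    self.count = self.count + 1
    uuId = self.count
    # SQL and bindings go in as two separate arguments, and the bindings
    # must be a sequence, hence the one-element tuple
    return self.dbpool.runQuery(
        "SELECT co2_data, patient_Id FROM btdata4 WHERE uid = ?", (uuId,))

This would also explain the "there are 2 supplied" error: if xuId reaches execute() as a string, sqlite3 treats that string as a sequence of bindings, so "10" supplies two bindings (one per character) while single-digit values happen to supply exactly one.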
Finally, don't paraphrase error messages. Always quote them verbatim. Copy/paste whenever possible; make a note when you've manually transcribed them.